495,657
I have followed instructions in this post How to insert text after a certain string in a file? but I suspect the instructions are not valid for OSX. I want to add some text into a source file using Bash sed '/pattern/a some text here' ${sourceFile} but when I run the command I get "/pattern/a some text here": command a expects \ followed by text edit I have created a new file called infile with a single line pattern and a bash script #!/bin/bash sed '/pattern/a\ text to insert' infile running the script echos "pattern" to the console but doesn't insert the text edit I have also tried for the bash script #!/bin/bash sed '/pattern/a\ add one line\ \\and one more' infile and the terminal echos pattern add one line \and one more but infile still has single line pattern
This file contains bytes C2 96 , which are the UTF-8 encoding of codepoint U+0096. That codepoint is one of the C1 control characters commonly called SPA "Start of Guarded Area" (or "Protected Area"). That isn't a useful character for any modern system, but it's unlikely to be harmful that it's there. The original source for this was likely a byte 0x96 in some single-byte 8-bit encoding that has been transcoded incorrectly somewhere along the way. Probably this was originally a Windows CP1252 en dash "–", which has byte value 96 in that encoding - most other plausible candidates have the control set at positions 80-9F - which has been translated to UTF-8 as though it was latin-1 ( ISO/IEC 8859-1 ), which is not uncommon. That would lead to the byte being interpreted as the control character and translated accordingly as you've seen. You can fix this file with the iconv tool, which is part of glibc. iconv -f utf-8 -t iso-8859-1 < mwe.txt | iconv -f cp1252 -t utf-8 produces a correct version of your minimal example for me. That works by first converting the UTF-8 to latin-1 (inverting the earlier mistranslation), and then reinterpreting that as cp1252 to convert it back to UTF-8 correctly. It does depend on what else is in the real file, however. If you have characters outside Latin-1 elsewhere it will fail because it can't encode those correctly at the first step. If you don't have iconv, or it doesn't work for the real file, you can replace the bytes directly using sed: LC_ALL=C sed -e $'s/\xc2\x96/\xe2\x80\x93/g' < mwe.txt This replaces C2 96 with the UTF-8 en dash encoding E2 80 93 . You could also replace it with e.g. a hyphen or two by changing \xe2\x80\x93 into -- . You can grep in a similar fashion. We're using LC_ALL=C to make sure we're reading the actual bytes, and not having grep interpret things: LC_ALL=C grep -R $'\xc2\x96' . will list out everywhere under this directory those bytes appear. You may want to limit it to just text files if you have mixed content around, since binary files will include any pair of bytes fairly often.
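A quick way to verify the result: count the lines that still contain the offending byte pair after the conversion (here fixed.txt is just an illustrative name for wherever you redirected the converted output); it should print 0 once the file is clean:
LC_ALL=C grep -c $'\xc2\x96' fixed.txt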
{ "source": [ "https://unix.stackexchange.com/questions/495657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269365/" ] }
495,790
When I display the manual for pwd command, it says that long options like --physical are supported $ man pwd PWD(1) User Commands PWD(1) NAME pwd - print name of current/working directory SYNOPSIS pwd [OPTION]... DESCRIPTION Print the full filename of the current working directory. -L, --logical use PWD from environment, even if it contains symlinks -P, --physical avoid all symlinks However, it fails when I type the following $ pwd --physical -bash: pwd: --: invalid option pwd: usage: pwd [-LP] Why are long options not working for me? I'm using RHEL 6.4. No alias for pwd is configured. Looks like it's standard pwd: $ which pwd /bin/pwd
bash has a built-in command pwd which is what you are using when you simply type pwd into your shell. To get the pwd as described by the manpage, you need to force use of the external command. You can do this by specifying the full path to the executable ( /bin/pwd in your case) or by prepending env before the line: env pwd , which starts the env command (which can be used to add settings to the environment, though that is not done here), which in turn starts the command specified. As env doesn't have a builtin pwd , the "real" /bin/pwd is executed. The advantage of the builtin pwd in bash is that bash keeps track of the current directory, so getting the value is at zero cost, whereas the external command needs to search up through the filesystem to determine the path, which is much more IO intensive.
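You can see both versions, and pick one explicitly, with something like the following (output wording may differ slightly between bash versions):
type -a pwd
# pwd is a shell builtin
# pwd is /bin/pwd
builtin pwd -P        # forces the bash builtin, which only takes the short options
/bin/pwd --physical   # forces the external binary, where the long options work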
{ "source": [ "https://unix.stackexchange.com/questions/495790", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109300/" ] }
495,797
I have a csv file having many rows and its header is: DateTime,CallEndTime,KeywordTagTexts,TotalDuration Position of the the header keys changes, therefore I have calculated positions using sed duration=$(sed -n $'1s/,/\\\n/gp' rawfile.csv | grep -nx 'TotalDuration' | cut -d: -f1); callend=$(sed -n $'1s/,/\\\n/gp' rawfile.csv | grep -nx 'CallEndTime' | cut -d: -f1); callstart=$(sed -n $'1s/,/\\\n/gp' rawfile.csv | grep -nx 'DateTime' | cut -d: -f1); The value in DateTime is "2018-12-18 18:36:55" in date time format and TotalDuration is in seconds. I want to add the value DateTime + TotalDuration to CallEndTime,
{ "source": [ "https://unix.stackexchange.com/questions/495797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332616/" ] }
495,899
One of the tutorials I've been following briefly stated that cd . has no use. When trying to replicate issue shown by OP in Symbolic link recursion - what makes it “reset”? , I also tried cd . , which showed the same effect OP described (growing $PWD variable), which can be countered with cd -P . This makes me wonder, is there any case where one does in fact would want to use cd . ?
The path of the directory could have changed since the last command was executed, and without cd . the bash and ksh93 shells will rely on the logical working directory described in the post linked in the question, so calling cd . which makes the shell issue the getcwd() syscall will ensure your current path is still valid. Steps to reproduce in bash: In a terminal tab issue mkdir ./dir_no_1; cd ./dir_no_1 In a different terminal tab issue mv dir_no_1 dir_no_2 In the first terminal tab issue echo $PWD and pwd . Notice that the directory has been externally renamed; the shell's environment has not been updated. Issue cd .; pwd; echo $PWD . Notice the value has been updated. ksh93, however, does not update the environment information, so cd . in ksh93 may in fact be useless. In /bin/dash on Ubuntu and other Debian-based systems, cd . returns dash: 3: cd: can't cd to . error, however cd -P . works (unlike in ksh93).
{ "source": [ "https://unix.stackexchange.com/questions/495899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85039/" ] }
495,958
I saw sed 's=.*/==' in context of sh script and I'm puzzled. I could not find in sed manual or web search (for sed s= ) how s is used, not s/// . Apart from s I see only one potential command here = (Print the current input line number), but in such case what the rest is doing... Running the command in shell produces same output as input for e.g echo 'jkfdsa=335r34' , whereas echo 'jkfdsa=335r34' | sed 's/=.*/==/' does replacement as per manual. Also slightly modifying command to e.g. echo 'jkfdsa=3' | sed 's798=.*/==/' gives sed: -e expression #1, char 11: unterminated 's' command , so original should have some correct meaning. What is it?
The = are alternative delimiters. These are used since the pattern contains a / (which is the more commonly used delimiter). Almost any character can be used as an alternative delimiter, so s@.*/@@ or s_.*/__ would have meant the same thing. With the ordinary delimiter, the sed expression could have been written as s/.*\/// (the literal / that the expression wants to match needs to be escaped here) or, possibly more readable, s/.*[/]// (most characters within a [...] character class are literal 1 ) What the sed expression does is to substitute anything that matches .*/ with nothing. This will have the effect of removing everything up to and including the last / character on the line. It will remove up to the last / (not the first) since .* does a greedy match of any sequence of any characters. Example: $ echo 'a/b/c' | sed 's/.*[/]//' c The unterminated 's' command error that you get when testing s798=.*/==/ is due to 7 being used as the delimiter for the s command. The expression s7.*/77 would have worked though. 1 ... apart from the characters that have special meaning within [...] such as ^ (at the start) and - (when not first, second after ^ , or last). The characters [ and ] also needs special treatment within [...] , but that goes outside the scope of this question. If this is used to get the filename at the end of a path in some string or shell variable, then the basename utility may do a better job of it (and also does the right thing if the path ends with a slash): $ basename a/b/c c $ basename a/b/c/ c Likewise, the standard shell parameter substitution ${variable##*/} would, assuming the variable contains no newlines, be equivalent in its effect to passing the value of $variable through the above sed expression in a command substitution, but many times faster. The variable substitution and the basename utility also copes with correctly handling pathnames containing newlines, which sed would not do (since it processes its input line by line).
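As a small illustration of the parameter substitution mentioned above, which avoids starting sed entirely when the value is already in a shell variable:
path=a/b/c
printf '%s\n' "${path##*/}"    # prints "c"; everything up to the last / is removed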
{ "source": [ "https://unix.stackexchange.com/questions/495958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266260/" ] }
496,104
Is there a way to open() a file and cause it to shrink? One can, of course, open them in append-mode or seek to the end and write to cause them to grow. However, as far as I know, there is no method to shrink a file via typical unix-style system call interfaces. The only way to do so, as far as I know, is by faking it by creating a new shorter file and rename() it in place of the older one. I just wanted confirmation, because I saw an answer that implied that it was possible to make file editors that worked directly on a file instead of going through the process of making a new one and renaming it in place. I've always thought that the file api in libc and unix-style system call interfaces did not allow for the shrinking of files to ease implementation of filesystems and maybe avoid usage patterns that might contribute to fragmentation.
man -s 2 ftruncate says DESCRIPTION The truncate() and ftruncate() functions cause the regular file named by path or referenced by fd to be truncated to a size of precisely length bytes. ... CONFORMING TO POSIX.1-2001, POSIX.1-2008, 4.4BSD, SVr4 (these calls first appeared in 4.2BSD). it goes on to say that if you use ftruncate you must have opened the file for writing, and if you use truncate the file must be writable.
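From the shell, the same functionality is exposed by the truncate(1) utility in GNU coreutils, so a quick sketch of shrinking a file in place (somefile is just a placeholder name) is:
truncate -s 100 somefile    # shrink (or extend) somefile to exactly 100 bytes
ls -l somefile              # the size is now 100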
{ "source": [ "https://unix.stackexchange.com/questions/496104", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189744/" ] }
496,179
I have an awk script, new.awk : BEGIN { FS = OFS = "," } NR == 1 { for (i = 1; i <= NF; i++) f[$i] = i } NR > 1 { begSecs = mktime(gensub(/[":-]/, " ", "g", $(f["DateTime"]))) endSecs = begSecs + $(f["TotalDuration"]) $(f["CallEndTime"]) = strftime("%Y-%m-%d %H:%M:%S", endSecs) } { print } I am calling this in shell awk new.awk sample.csv ... but I can see the changes in the terminal. How to make the change in-place in the file, like when using sed -i ?
GNU awk (commonly found on Linux systems), since version 4.1.0, can include an "awk source library" with -i or --include on the command line. One of the source libraries that is distributed with GNU awk is one called inplace : $ cat file hello there $ awk -i inplace '/hello/ { print "oh,", $0 }' file $ cat file oh, hello As you can see, this makes the output of the awk code replace the input file. The line saying there is not kept as the program does not output it. With an awk script in a file, you would use it like awk -i inplace -f script.awk datafile If the awk variable INPLACE_SUFFIX is set to a string, then the library would make a backup of the original file with that as a filename suffix. awk -i inplace -v INPLACE_SUFFIX=.bak -f script.awk datafile If you have several input files, each file will be individually edited in place. But you can turn in-place editing off for a file (or a set of files) by using inplace=0 on the command line before that file: awk -i inplace -f script.awk file1 file2 inplace=0 file3 inplace=1 file4 In the above command, file3 would not be edited in place. For a more portable "in-place edit" of a single file, use tmpfile=$(mktemp) cp file "$tmpfile" && awk '...some program here...' "$tmpfile" >file rm "$tmpfile" This would copy the input file to a temporary location, then apply the awk code on the temporary file while redirecting to the original filename. Doing the operations in this order (running awk on the temporary file, not on the original file) ensures that the file meta-data (permissions and ownership) of the original file is not modified.
{ "source": [ "https://unix.stackexchange.com/questions/496179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332716/" ] }
496,461
I have installed python3.7 however I am not sure how to make it the default python. See below: ~/Documents/robosuite$ python3.7 Python 3.7.1 (default, Oct 22 2018, 11:21:55) [GCC 8.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> KeyboardInterrupt >>> ~/Documents/robosuite$ python3 Python 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> I want python3.7 to show up when I use the command python3
A simple solution is to edit .bashrc and put this line in it: alias python3=python3.7 Whenever you type python3 , the shell will replace it with python3.7 . Or you can use the update-alternatives command, which is preferred, i.e.: sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1 sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 2 Here python3.7 will have a higher priority than python3.6 . Then use: sudo update-alternatives --config python3 and press the Enter key if you are satisfied with the current selection.
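To check the result afterwards (a quick sanity check; the second command only applies if you went the update-alternatives route):
python3 --version                      # should now report Python 3.7.x
update-alternatives --display python3  # lists the registered alternatives and the current choice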
{ "source": [ "https://unix.stackexchange.com/questions/496461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/333207/" ] }
496,982
I created some systemd services which basically works: location: /etc/systemd/system/multi-user.target.wants/publicapi.service content: [Unit] Description=public api startup script [Service] Type=oneshot RemainAfterExit=yes EnvironmentFile=-/etc/environment WorkingDirectory=/home/techops ExecStart=/home/techops/publicapi start ExecStop=/home/techops/publicapi stop [Install] WantedBy=multi-user.target When I try to restart the service as techops user in the command line, I get the following output: ==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units === Authentication is required to start 'publicapi.service'. Multiple identities can be used for authentication: 1. Myself,,, (defaultuser) 2. ,,, (techops) Choose identity to authenticate as (1-2): I want that only techops can restart services and I want that this prompt does not appear when being logged in as techops. How can I do that? I read that there are different approaches with polkit-1 or sudoers, but I'm unsure. [UPDATE] 2019-01-27 4:40pm Thanks for this comprehensive answer to Thomas and Perlduck. It helped me to improve my knowledge of systemd. According to the approach to start the service without a password prompt, and I want to apologize that I did not emphasize the real problem enough: Actually, what is most important for me is that no other user than techops should stop or start the service. But at least with the first two approaches I can still run service publicapi stop and I get the prompt ==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units === again. When I choose the defaultuser and know the password, I could stop all the services. I want to deny this user from doing that, even if he has the password . Important background info to better understand why this is the more important part for me: The defaultuser is the only user which is exposed to ssh but this user cannot do anything else (except changing to other users if you have the password of these other users). But at the moment, he can start or stop the services, but this user must not to do this. If someone gets the password of defaultuser and logs in via ssh, then he could stop all the services at the moment. This is what I meant with "I want that only techops can restart services". Sorry, that I was no that exact at my initial question. I thought that sudoing the techops user would maybe bypass this problem, but it does not. The problem itself is not to run the command without password prompt. (I could easily do that as techops user when I just execute /home/techops/publicapi start ). The problem itself is to lock out the defaultuser from starting these services. And I hoped that any of the solutions could do that. I started with the approaches of Thomas. The approach with sudo works when I don't want to get asked for the password for the user techops when I execute the commands as explained, e.g. sudo systemctl start publicapi.service sudo systemctl stop publicapi.service The second approach does not work for me yet. I cannot start the service without password prompt ==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units === and I stall can login as defaultuser when I have the password of this user. With the third approach, the service does not even start at boot process anymore so I'm not sure if this approach is the right one for me at all. I cannot even able it with systemctl enable publicapi.service which leads me to the following error: Failed to enable unit: Unit file mycuisine-publicapi.service does not exist. 
The error does not occur when I move all the services back into /etc/systemd/system/ and execute systemctl enable publicapi.service . Then the service starts again at boot. All these approaches will more or less help to bypass the password prompt for the techops user, but when I run service publicapi stop or systemctl stop publicapi as defaultuser, I can still stop the services if I have the password. But my goal is to lock out defaultuser from starting or stopping services at all.
To achieve that the user techops can control the service publicapi.service without giving a password, you have different possiblities. Which one is suitable for you cannot be answered as you have to choose on your own. The classical sudo approach is maybe the most used, as it is there for a long time. You would have to create e.g. the file as follows. Note that the drop-in directory /etc/sudoers.d is only active when #includedir /etc/sudoers.d is set in /etc/sudoers . But that should be the case if you are using a modern Ubuntu distribution. As root execute: cat > /etc/sudoers.d/techops << SUDO techops ALL= NOPASSWD: /bin/systemctl restart publicapi.service techops ALL= NOPASSWD: /bin/systemctl stop publicapi.service techops ALL= NOPASSWD: /bin/systemctl start publicapi.service SUDO Now you should be able to run the systemctl commands as user techops without giving a password by prepending sudo to the commands. sudo systemctl start publicapi.service sudo systemctl stop publicapi.service sudo systemctl restart publicapi.service The second method would be to use PolKit (was renamed from PolicyKit ) to allow the user techops to control systemd services. Depending on the version of polit , you can give normal users control over systemd units. To check the polkit version, just run pkaction --version . with polkit version 0.106 and higher , you can allow users to control specific systemd units. To do so, you could create a rule as root : cat > /etc/polkit-1/rules.d/10-techops.rules << POLKIT polkit.addRule(function(action, subject) { if (action.id == "org.freedesktop.systemd1.manage-units" && action.lookup("unit") == "publicapi.service" && subject.user == "techops") { return polkit.Result.YES; } }); POLKIT with polkit version 0.105 and lower : you can allow users to control systemd units. This unfortunately includes all systemd units and you might not want to do this. Not sure if there is a way to limit access to specific systemd units with version 0.105 or lower, but maybe someone else can clarify. To enable this, you could create a file as root : cat > /etc/polkit-1/localauthority/50-local.d/org.freedesktop.systemd1.pkla << POLKIT [Allow user techops to run systemctl commands] Identity=unix-user:techops Action=org.freedesktop.systemd1.manage-units ResultInactive=no ResultActive=no ResultAny=yes POLKIT In both cases you can run systemctl [start|stop|restart] publicapi.service as user techops without giving a password. In the latter case ( polkit <= 0.105 ) the user techops could control any systemd unit. A third option would be to make the service a user service, which does not need sudo or polkit configurations. This puts everything under the control of the user and only works if your actual service that is started with /home/techops/publicapi start can run without root privileges. First you have to enable lingering for the user techops . This is needed to startup the user service on boot. As root execute: loginctl enable-linger techops Next you have to move the systemd unit file into the techops user directory. As user techops execute the commands as follows. 
mkdir -p ~/.config/systemd/user cat > ~/.config/systemd/user/publicapi.service << UNIT [Unit] Description=public api startup script [Service] Type=oneshot RemainAfterExit=yes EnvironmentFile=-/etc/environment WorkingDirectory=/home/techops ExecStart=/home/techops/publicapi start ExecStop=/home/techops/publicapi stop [Install] WantedBy=default.target UNIT Note that the WantedBy has to be default.target as there is no multi-user.target in the user context. Now reload the configuration and enable the service. Again as user techops execute the commands. systemctl --user daemon-reload systemctl --user enable publicapi.service systemctl --user start publicapi.service In general you should place your systemd units in /etc/systemd/system/ not directly in /etc/systemd/system/multi-user.target.wants . When you execute systemctl enable publicapi.service a symbolic link will be created in etc/systemd/system/multi-user.target.wants or whatever target is specified for that unit. As already mentioned, if the service/process itself can be run without root privileges you should consider adding User=techops to your unit file to run the process with a non-privileged user account.
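For the user-service variant, a short sketch of how to confirm that everything is in place (run as the techops user):
systemctl --user status publicapi.service     # the unit should be loaded and enabled
loginctl show-user techops | grep Linger      # should show Linger=yes so the service starts at boot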
{ "source": [ "https://unix.stackexchange.com/questions/496982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3214/" ] }
497,146
I would like to register a URL scheme (or protocol) handler for my own custom URL protocol, so that clicking on a link with this custom protocol will execute a command on that URL. Which steps do I need to take to add this handler? Example: I want to open URLs like ddg://query%20terms in a new DuckDuckGo browser search. If this protocol already exists, I assume that the steps to override a handler don't differ much from the steps to create a new one. Yes, technically, this is just a URL scheme, not a protocol.
To register a new URL scheme handler with XDG, first create a Desktop Entry which specifies the x-scheme-handler/... MIME type: [Desktop Entry] Type=Application Name=DDG Scheme Handler Exec=open-ddg.sh %u StartupNotify=false MimeType=x-scheme-handler/ddg; Note that %u passes the URL (e.g. ddg://query%20terms ) as a single parameter, according to the Desktop Entry Specification . Once you have created this Desktop Entry and installed it (i.e. put it in the local or system applications directory for XDG, like ~/.local/share/applications/ or /usr/share/applications/ ), then you must register the application with the MIME type (assuming you had named your Desktop Entry ddg-opener.desktop ): xdg-mime default ddg-opener.desktop x-scheme-handler/ddg A reference implementation of the ddg-open.sh handler: #!/usr/bin/env bash # bash and not just sh because we are using some bash-specific syntax if [[ "$1" == "ddg:"* ]]; then ref=${1#ddg://} #ref=$(python -c "import sys, urllib as ul; print ul.unquote_plus(sys.argv[1])" "$ref") # If you want decoding xdg-open "https://duckduckgo.com/?q=$ref" else xdg-open "$1" # Just open with the default handler fi
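Once the desktop file is installed and registered, a quick way to test it from a terminal (assuming the file names used above):
xdg-mime query default x-scheme-handler/ddg   # should print ddg-opener.desktop
xdg-open "ddg://query%20terms"                # should launch the handler script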
{ "source": [ "https://unix.stackexchange.com/questions/497146", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13308/" ] }
497,526
I am programming a Linux shell script that will print status banners during its execution only if the proper tool, say figlet , is installed (this is: reachable on system path ). Example: #!/usr/bin/env bash echo "foo" figlet "Starting" echo "moo" figlet "Working" echo "foo moo" figlet "Finished" I would like for my script to work without errors even when figlet is not installed . What could be a practical method ?
My interpretation would use a wrapper function named the same as the tool; in that function, execute the real tool if it exists: figlet() { if command -v figlet >/dev/null 2>&1 then command figlet "$@" else : fi } Then you can have figlet arg1 arg2... unchanged in your script. @Olorin came up with a simpler method: define a wrapper function only if we need to (if the tool doesn't exist): if ! command -v figlet > /dev/null; then figlet() { :; }; fi If you'd like the arguments to figlet to be printed even if figlet isn't installed, adjust Olorin's suggestion as follows: if ! command -v figlet > /dev/null; then figlet() { printf '%s\n' "$*"; }; fi
{ "source": [ "https://unix.stackexchange.com/questions/497526", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57439/" ] }
497,532
I had two namespaces (NS) in my NVMe SSD (Samsung) and deleted both to create just one, but Ubuntu is not able to recognize the device upon deleting.  How do I recover the drive now? Command used to delete: sudo nvme delete-ns /dev/nvme0n1 -n 1 Ubuntu 18.04.1 LTS Kernel 4.15
{ "source": [ "https://unix.stackexchange.com/questions/497532", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/334127/" ] }
497,543
The classic scenario with Operator Precedence, you have a line like : (cd ~/screenshots/ && ls screenshot* | head -n 5) And you don't know if it's parsed ((A && B) | C) or (A && B | C) ... The almost official documentation found here doesn't list the pipe in the list so I cannot simply check in the table. Furthermore in bash, ( is not only for changing the order of operations but creates a subshell , so I'm not 100% sure this lines is the equivalent of the previous line : ((cd ~/screenshots/ && ls screenshot*) | head -n 5) More generally, how to know the AST of a bash line? In python I have a function that gives me the tree so that I can easily double check the order of operation.
cd ~/screenshots/ && ls screenshot* | head -n 5 This is equivalent to cd ~/screenshots && { ls screenshot* | head -n 5 ; } (the braces group commands together without a subshell ). The precedence of | is thus higher (binds tighter) than && and || . That is, A && B | C and A || B | C always mean that only B's output is to be given to C . You can use (...) or { ... ; } to join commands together as a single entity for disambiguation if necessary: { A && B ; } | C A && { B | C ; } # This is the default, but you might sometimes want to be explicit You can test this out using some different commands. If you run echo hello && echo world | tr a-z A-Z then you'll get hello WORLD back: tr a-z A-Z upper-cases its input , and you can see that only echo world was piped into it, while echo hello went through on its own. This is defined in the shell grammar , although not terribly clearly: the and_or production (for && / || ) is defined to have a a pipeline in its body, while pipeline just contains command , which doesn't contain and_or - only the complete_command production can reach and_or , and it only exists at the top level and inside the bodies of structural constructs like functions and loops. You can manually apply that grammar to get a parse tree for a command, but Bash doesn't provide anything itself. I don't know of any shell that does beyond what's used for their own parsing. The shell grammar has a lot of special cases defined only semi-formally and it can be quite a mission to get right. Even Bash itself has sometimes gotten it wrong , so the practicalities and the ideal may be different. There are external parsers that attempt to match the syntax and produce a tree, and of those I will broadly recommend Morbig , which attempts to be the most reliable.
{ "source": [ "https://unix.stackexchange.com/questions/497543", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/256871/" ] }
497,639
All the other questions on the SE network deal with scenarios where either the date is assumed to be now ( Q ) or where only a date is specified ( Q ). What I want to do is supply a date and time, and then subtract a time from that. Here is what I tried first: date -d "2018-12-10 00:00:00 - 5 hours - 20 minutes - 5 seconds" This results in 2018-12-10 06:39:55 - It added 7 hours. Then subtracted 20:05 minutes. After reading the man and info page of date , I thought I have it fixed with this: date -d "2018-12-10T00:00:00 - 5 hours - 20 minutes - 5 seconds" But, same result. Where does it even get the 7 hours from? I tried other dates as well because I thought maybe we had 7200 leap seconds on that day, who knows lol. But same results. A few more examples: $ date -d "2018-12-16T00:00:00 - 24 hours" +%Y-%m-%d_%H:%M:%S 2018-12-17_02:00:00 $ date -d "2019-01-19T05:00:00 - 2 hours - 5 minutes" +%Y-%m-%d_%H:%M:%S 2019-01-19_08:55:00 But here it becomes interesting. If I omit the time on input, it works fine: $ date -d "2018-12-16 - 24 hours" +%Y-%m-%d_%H:%M:%S 2018-12-15_00:00:00 $ date -d "2019-01-19 - 2 hours - 5 minutes" +%Y-%m-%d_%H:%M:%S 2019-01-18_21:55:00 $ date --version date (GNU coreutils) 8.30 What am I missing? Update: I've added a Z at the end, and it changed the behaviour: $ date -d "2019-01-19T05:00:00Z - 2 hours" +%Y-%m-%d_%H:%M:%S 2019-01-19_04:00:00 I'm still confused though. There is not much about this in the GNU info page about date. I'm guessing this is a timezone issue, but quoting The Calendar Wiki on ISO 8601 : If no UTC relation information is given with a time representation, the time is assumed to be in local time. Which is what I want. My local time is set correctly too. I'm not sure why date would mess with the timezone at all in this simple case of me supplying a datetime and wanting to subtract something off of it. Shouldn't it subtract the hours from the date string first? Even if it does convert it to a date first and then does the subtraction, if I leave out any subtractions I get exactly what I want: $ date -d "2019-01-19T05:00:00" +%Y-%m-%d_%H:%M:%S 2019-01-19_05:00:00 So IF this truly is a timezone issue, where does that madness come from?
That last example should have clarified things for you: timezones . $ TZ=UTC date -d "2019-01-19T05:00:00Z - 2 hours" +%Y-%m-%d_%H:%M:%S 2019-01-19_03:00:00 $ TZ=Asia/Colombo date -d "2019-01-19T05:00:00Z - 2 hours" +%Y-%m-%d_%H:%M:%S 2019-01-19_08:30:00 As the output clearly varies by the timezone, I'd suspect some non-obvious default taken for a time string without a timezone specified. Testing a couple of values, it seems to be UTC-05:00 , though I'm not sure what that is. $ TZ=UTC date -d "2019-01-19T05:00:00 - 2 hours" +%Y-%m-%d_%H:%M:%S%Z 2019-01-19_08:00:00UTC $ TZ=UTC date -d "2019-01-19T05:00:00Z - 2 hours" +%Y-%m-%d_%H:%M:%S%Z 2019-01-19_03:00:00UTC $ TZ=UTC date -d "2019-01-19T05:00:00" +%Y-%m-%d_%H:%M:%S%Z 2019-01-19_05:00:00UTC It's only used when performing date arithmetic. It seems the issue here is that - 2 hours is not taken as arithmetic, but as a timezone specifier : # TZ=UTC date -d "2019-01-19T05:00:00 - 2 hours" +%Y-%m-%d_%H:%M:%S%Z --debug date: parsed datetime part: (Y-M-D) 2019-01-19 05:00:00 UTC-02 date: parsed relative part: +1 hour(s) date: input timezone: parsed date/time string (-02) date: using specified time as starting value: '05:00:00' date: starting date/time: '(Y-M-D) 2019-01-19 05:00:00 TZ=-02' date: '(Y-M-D) 2019-01-19 05:00:00 TZ=-02' = 1547881200 epoch-seconds date: after time adjustment (+1 hours, +0 minutes, +0 seconds, +0 ns), date: new time = 1547884800 epoch-seconds date: timezone: TZ="UTC" environment value date: final: 1547884800.000000000 (epoch-seconds) date: final: (Y-M-D) 2019-01-19 08:00:00 (UTC) date: final: (Y-M-D) 2019-01-19 08:00:00 (UTC+00) 2019-01-19_08:00:00UTC So, not only is no arithmetic being done, there seems to be a daylight savings 1 hour adjustment on the time, leading to a somewhat nonsensical time for us. This also holds for addition: # TZ=UTC date -d "2019-01-19T05:00:00 + 5:30 hours" +%Y-%m-%d_%H:%M:%S%Z --debug date: parsed datetime part: (Y-M-D) 2019-01-19 05:00:00 UTC+05:30 date: parsed relative part: +1 hour(s) date: input timezone: parsed date/time string (+05:30) date: using specified time as starting value: '05:00:00' date: starting date/time: '(Y-M-D) 2019-01-19 05:00:00 TZ=+05:30' date: '(Y-M-D) 2019-01-19 05:00:00 TZ=+05:30' = 1547854200 epoch-seconds date: after time adjustment (+1 hours, +0 minutes, +0 seconds, +0 ns), date: new time = 1547857800 epoch-seconds date: timezone: TZ="UTC" environment value date: final: 1547857800.000000000 (epoch-seconds) date: final: (Y-M-D) 2019-01-19 00:30:00 (UTC) date: final: (Y-M-D) 2019-01-19 00:30:00 (UTC+00) 2019-01-19_00:30:00UTC Debugging a bit more, the parsing seems to be: 2019-01-19T05:00:00 - 2 ( -2 being the timezone), and hours (= 1 hour), with an implied addition. 
It becomes easier to see if you use minutes instead: # TZ=UTC date -d "2019-01-19T05:00:00 - 2 minutes" +%Y-%m-%d_%H:%M:%S%Z --debug date: parsed datetime part: (Y-M-D) 2019-01-19 05:00:00 UTC-02 date: parsed relative part: +1 minutes date: input timezone: parsed date/time string (-02) date: using specified time as starting value: '05:00:00' date: starting date/time: '(Y-M-D) 2019-01-19 05:00:00 TZ=-02' date: '(Y-M-D) 2019-01-19 05:00:00 TZ=-02' = 1547881200 epoch-seconds date: after time adjustment (+0 hours, +1 minutes, +0 seconds, +0 ns), date: new time = 1547881260 epoch-seconds date: timezone: TZ="UTC" environment value date: final: 1547881260.000000000 (epoch-seconds) date: final: (Y-M-D) 2019-01-19 07:01:00 (UTC) date: final: (Y-M-D) 2019-01-19 07:01:00 (UTC+00) 2019-01-19_07:01:00UTC So, well, date arithmetic is being done, just not the one that we asked for. ¯\(ツ)/¯
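In other words, one practical workaround is to make the timezone explicit inside the string, so that the "- N hours" parts can only be parsed as arithmetic. A sketch for the original example (shown with TZ=UTC so the output is unambiguous):
TZ=UTC date -d "2018-12-10T00:00:00Z - 5 hours - 20 minutes - 5 seconds" +%Y-%m-%d_%H:%M:%S
# should print 2018-12-09_18:39:55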
{ "source": [ "https://unix.stackexchange.com/questions/497639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/296862/" ] }
497,674
I'm trying to use grep to show only lines containing either of the two words, if only one of them appears in the line, but not if they are in the same line. So far I've tried grep pattern1 | grep pattern2 | ... but didn't get the result I expected.
A tool other than grep is the way to go. Using perl, for instance, the command would be: perl -ne 'print if /pattern1/ xor /pattern2/' perl -ne runs the command given over each line of stdin, which in this case prints the line if it matches /pattern1/ xor /pattern2/ , or in other words matches one pattern but not the other (exclusive or). This works for the pattern in either order, and should have better performance than multiple invocations of grep , and is less typing as well. Or, even shorter, with awk: awk 'xor(/pattern1/,/pattern2/)' or for versions of awk that don't have xor : awk '/pattern1/+/pattern2/==1'
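A quick demonstration with GNU awk, keeping only the lines that match exactly one of the two patterns:
printf 'foo\nbar\nfoo bar\nbaz\n' | awk 'xor(/foo/,/bar/)'
# foo
# bar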
{ "source": [ "https://unix.stackexchange.com/questions/497674", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/334246/" ] }
497,706
I thought user=user&password=password will be in the body of the request, but I can't find it. Where is it? Does -v show the complete request including the body? Thanks. $ curl --data "user=user&password=password" -v http://google.com/ * Trying 172.217.3.110... * TCP_NODELAY set * Connected to google.com (172.217.3.110) port 80 (#0) > POST / HTTP/1.1 > Host: google.com > User-Agent: curl/7.58.0 > Accept: */* > Content-Length: 27 > Content-Type: application/x-www-form-urlencoded > * upload completely sent off: 27 out of 27 bytes < HTTP/1.1 405 Method Not Allowed < Allow: GET, HEAD < Date: Wed, 30 Jan 2019 14:01:40 GMT < Content-Type: text/html; charset=UTF-8 < Server: gws < Content-Length: 1589 < X-XSS-Protection: 1; mode=block < X-Frame-Options: SAMEORIGIN < <!DOCTYPE html> <html lang=en> <meta charset=utf-8> <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width"> <title>Error 405 (Method Not Allowed)!!1</title> <style> *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px} </style> <a href=//www.google.com/><span id=logo aria-label=Google></span></a> <p><b>405.</b> <ins>That’s an error.</ins> <p>The request method <code>POST</code> is inappropriate for the URL <code>/</code>. <ins>That’s all we know.</ins> * Connection #0 to host google.com left intact
The description of curl ’s -v option says -v , --verbose Makes curl verbose during the operation. Useful for debugging and seeing what's going on "under the hood". A line starting with '>' means "header data" sent by curl, '<' means "header data" received by curl that is hidden in normal cases, and a line starting with '*' means additional info provided by curl. If you only want HTTP headers in the output, -i , --include might be the option you're looking for. If you think this option still doesn't give you enough details, consider using --trace or --trace-ascii instead. So -v shows headers (in addition to the response body, which curl shows anyway), and you need --trace to see the bodies: curl --data "user=user&password=password" --trace google.log http://google.com/ will output detailed logs in google.log .
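For a quick look without writing a log file, --trace-ascii accepts - as the file name to mean standard output, and its output includes the request body:
curl --data "user=user&password=password" --trace-ascii - http://google.com/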
{ "source": [ "https://unix.stackexchange.com/questions/497706", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
497,897
By default, i3 ignores the Print Screen key available in most keyboards. How can it be activated?
Everything inside i3 needs to be bound and just a minimal set of keys is added/generated inside the default config. Some keys that are not letters can be represented with its keycodes or keysyms. More about this subject here: i3 User’s Guide - 4.3. Keyboard bindings Printscreen is the Print keysym. I personally use gnome-screenshot to that task, since it can crop images, making life easier. Add the following lines to your .config/i3/config or any config file you are using as the i3wm main config file. #interactive screenshot by pressing printscreen bindsym Print exec gnome-screenshot -i #crop-area screenshot by pressing Mod + printscreen bindsym $mod+Print exec gnome-screenshot -a Some people like to use scrot . That is up to you to decide :) . Example: bindsym Print exec scrot $HOME/Images/`date +%Y-%m-%d_%H:%M:%S`.png
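If you are unsure which keysym a key produces, xev will show it when you press the key, and i3 needs a reload to pick up config changes (a short sketch):
xev | grep keysym    # press the key and look for a line mentioning "Print"
i3-msg reload        # reload the i3 configuration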
{ "source": [ "https://unix.stackexchange.com/questions/497897", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56887/" ] }
499,792
In Linux, how do /etc/hosts and DNS work together to resolve hostnames to IP addresses? if a hostname can be resolved in /etc/hosts , does DNS apply after /etc/hosts to resolve the hostname or treat the resolved IP address by /etc/hosts as a "hostname" to resolve recursively? In my browser (firefox and google chrome), when I add to /etc/hosts : 127.0.0.1 google.com www.google.com typing www.google.com into the address bar of the browsers and hitting entering won't connect to the website. After I remove that line from /etc/hosts , I can connect to the website. Does it mean that /etc/hosts overrides DNS for resolving hostnames? After I re-add the line to /etc/hosts , I can still connect to the website, even after refreshing the webpage. Why doesn't /etc/hosts apply again, so that I can't connect to the website? Thanks.
This is dictated by the NSS (Name Service Switch) configuration i.e. /etc/nsswitch.conf file's hosts directive. For example, on my system: hosts: files mdns4_minimal [NOTFOUND=return] dns Here, files refers to the /etc/hosts file, and dns refers to the DNS system. And as you can imagine whichever comes first wins . Also, see man 5 nsswitch.conf to get more idea on this. As an aside, to follow the NSS host resolution orderings, use getent with hosts as database e.g.: getent hosts example.com
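Applied to the example in the question, with files listed before dns on the hosts: line, the /etc/hosts entry wins (a quick check):
getent hosts www.google.com
# should print something like: 127.0.0.1    google.com www.google.com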
{ "source": [ "https://unix.stackexchange.com/questions/499792", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
499,840
I teach an Intro to UNIX/Linux course at a local college and one of my students asked the following question: Why are some of the files in my directory colored white and others are gray? Are the white ones the ones I created today and the gray are existing files? As I looked into this I first thought the answer would be in the LS_COLORS variable, but further investigation revealed that the color listings were different when using the -l switch versus the -al switch with the ls command. See the following screen shots: Using ls -l the file named '3' shows as white but using the -al switch the same file shows a gray. Is this a bug in ls or does anyone know why this is happening?
It looks as if your prompt-string ( $PS1 ) is setting the bold attribute on characters to make the colors nicer, and not unsetting it. The output from ls doesn't know about this, and does unset bold. So after the first color output of ls , everything looks dimmer.
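A usual fix is to end every colored part of the prompt with a reset sequence, wrapped in \[ \] so bash can compute the prompt width correctly. A minimal sketch (your actual prompt string will differ):
PS1='\[\e[1;32m\]\u@\h\[\e[0m\]:\[\e[1;34m\]\w\[\e[0m\]\$ '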
{ "source": [ "https://unix.stackexchange.com/questions/499840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/336198/" ] }
499,897
I have this regex: (?<=prefix).*$ which returns any character following string "prefix" and it works fine on any online regex engines (e.g. https://regex101.com ). The problem is when I use that regex in bash: grep '(?<=prefix).*$' <<< prefixSTRING it does not match anything. Why that regex does not work with grep?
You have defined the right regex, but not set the right command-line flags for grep to understand it. By default grep supports BRE, and with the -E flag it does ERE. What you are using (a look-behind assertion) is available only in the PCRE regex flavor, which is supported only in GNU grep with its -P flag. Assuming you need to extract only the matching string after prefix , you also need the -o flag to tell grep to print only the matching portion: grep -oP '(?<=prefix).*$' <<< prefixSTRING There is also a grep variant that supports the PCRE library by default, pcregrep , in which you can just do pcregrep -o '(?<=prefix).*$' <<< prefixSTRING Detailed explanations of the various regex flavors, and the tools that implement each of them, are given in this wonderful Giles' answer
{ "source": [ "https://unix.stackexchange.com/questions/499897", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/336253/" ] }
499,901
I have a following bash script and it is being executed using popen(/path/to/script, 'r+') from C code and saving result back (Ex: DA00000); for this, i need to export a bash script file. I should not use a seperate bash script. How to write below script in a single line itself as command? So, i can use this command directly ( Ex:fp = popen(command "r")) in C code. #!/bin/sh url="$(grep 0x017a /sys/bus/pci/devices/*/device)" addr="$(echo $url | cut -d/ -f6)" str="$(head -n1 /sys/bus/pci/devices/$addr/resource | cut -d ' ' -f 1)" result="${str:${#str} - 8}" echo $result
{ "source": [ "https://unix.stackexchange.com/questions/499901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/335799/" ] }
500,549
I have a file whose content is similar to the following one. 0 0 0.2 0 0 0 0 I need to remove all the lines with a single zero. I was thinking to use grep -v "0" , but this removes also the line containing 0.2. I saw I could use the -w option, but this doesn't seem to work either. How can I remove all the lines containing just a single 0 and keep all those lines starting with a 0?
grep -vx 0 From man grep : -x, --line-regexp Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with ^ and $. -w fails because the first 0 in 0.02 is considered a "word", and hence this line is matched. This is because it is followed by a "non-word" character. You can see this if you run the original command without -v , i.e. grep -w "0" .
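A quick demonstration with the sample data from the question:
printf '0\n0\n0.2\n0\n0\n0\n0\n' | grep -vx 0
# 0.2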
{ "source": [ "https://unix.stackexchange.com/questions/500549", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
500,572
I want to create random unique numbers (UUIDs) as the following node.id=ffffffff-ffff-ffff-ffff-ffffffffffff First I tried this $ rndnum=` echo $RANDOM"-"echo $RANDOM"-"echo $RANDOM"-"echo $RANDOM"-"echo $RANDOM` $ echo $rndnum 30380-echo 21875-echo 14791-echo 32193-echo 11503 What is the right way to create the following (where f is any number)? ffffffff-ffff-ffff-ffff-ffffffffffff
On Linux, the util-linux / util-linux-ng package offers a command to generate UUIDs: uuidgen . $ uuidgen 5528f550-6559-4d61-9054-efb5a16a4de0 To quote the manual : The uuidgen program creates (and prints) a new universally unique identifier (UUID) using the libuuid (3) library. The new UUID can reasonably be considered unique among all UUIDs created on the local system, and among UUIDs created on other systems in the past and in the future. There are two types of UUIDs which uuidgen can generate: time-based UUIDs and random-based UUIDs. By default uuidgen will generate a random-based UUID if a high-quality random number generator is present. Otherwise, it will choose a time-based UUID. It is possible to force the generation of one of these two UUID types by using the -r or -t options. Addendum: The OP had provided a link in the comments to the documentation for Presto DB . After a bit of searching, I found this related discussion where it is explicitly mentioned that the node.id property is indeed a UUID. Adding the information provided by frostschutz in a comment: As an alternative to the uuidgen / libuuid approach, you can make use of an interface exposed by the Linux kernel itself to generate UUIDs: $ cat /proc/sys/kernel/random/uuid 00db2531-365c-415c-86f7-503a35fafa58 The UUID is re-generated on each request.
{ "source": [ "https://unix.stackexchange.com/questions/500572", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
500,578
Example on hyper terminal: I press alt + h , the script should check if hyper is already running in the background. If yes, it should open already the opened window. If not, open a new window. OS: Ubuntu 18.04
{ "source": [ "https://unix.stackexchange.com/questions/500578", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/336846/" ] }
500,626
I'm trying to experiment with shared objects and found the below snippet on http://www.gambas-it.org/wiki/index.php?title=Creare_una_Libreria_condivisa_(Shared_Library)_.so gcc -g -shared -Wl,-soname,libprimo.so.0 -o libprimo.so.0.0 primo.o -lc I browsed trough the manpages and online, but I didn't find what the -lc switch does, can someone tell me?
The option is shown as " -l_library_ " (no space) or " -l _library_ " (with a space) and c is the library argument, see https://linux.die.net/man/1/gcc -lc will link libc ( -lfoobar would link libfoobar etc.) General information about options and arguments UNIX commands often accept option arguments with or without whitespace. If you have an option o which takes an argument arg you can write -o arg or -oarg . On the other hand you can combine options that don't take an argument, e.g. -a -b -c or -abc . When you see -lc you can only find out from the documentation (man page) if this is the combination of options -l and -c or option -l with argument c or a single option -lc . See also https://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html Note: gcc is an exception from this general concept. You cannot combine options for gcc .
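As a concrete illustration, -lfoo makes the linker look for libfoo.so (or libfoo.a) in its library search path; for instance, linking the math library (prog.c is just a placeholder name):
gcc -o prog prog.c -lm    # links libm, just as -lc links libc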
{ "source": [ "https://unix.stackexchange.com/questions/500626", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/335248/" ] }
501,128
In my terminal it printed out a seemingly random number 127 . I think it is printing some variable's value and to check my suspicion, I defined a new variable v=4 . Running echo $? again gave me 0 as output. I'm confused as I was expecting 4 to be the answer.
From man bash : $? Expands to the exit status of the most recently executed foreground pipeline. echo $? will return the exit status of the last command. You got 127 , which is the exit status of the last executed command; it indicates that command failed (127 typically means "command not found"). Commands normally exit with a status of 0 on successful completion. The last command gave output 0 since the echo $v on the previous line finished without an error. If you execute the commands v=4 echo $v echo $? You will get output as: 4 (from echo $v) 0 (from echo $?) Also try: true echo $? You will get 0 . false echo $? You will get 1 . The true command does nothing, it just exits with a status code 0 ; and the false command also does nothing, it just exits with a status code indicating failure (i.e. with status code 1 ).
{ "source": [ "https://unix.stackexchange.com/questions/501128", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291388/" ] }
501,153
This has been happening for a while now about a month. I thought it would be fix with new updates but it didn't. The file /var/log/Xorg.0.log.old has this last few lines before the crash [574.086] (II) NVIDIA(GPU-0): Deleting GPU-0 [574.087] (WW) xf86CloseConsole: KDSETMODE failed: Input/output error [574.087] (WW) xf86CloseConsole: VT_GETMODE failed: Input/output error [574.087] (WW) xf86CloseConsole: VT_ACTIVATE failed: Input/output error [574.088] (II) Server terminated successfully (0). Closing log file. I tried opening up the virtual terminal but won't let me type (It's stuck) So how can I fix it?
{ "source": [ "https://unix.stackexchange.com/questions/501153", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/337350/" ] }
501,828
In a bash script I need various values from /proc/ files. Until now I have dozens of lines grepping the files directly like that: grep -oP '^MemFree: *\K[0-9]+' /proc/meminfo In an effort to make that more efficient I saved the file content in a variable and grepped that: a=$(</proc/meminfo) echo "$a" | grep -oP '^MemFree: *\K[0-9]+' Instead of opening the file multiple times this should just open it once and grep the variable content, which I assumed would be faster – but in fact it is slower: bash 4.4.19 $ time for i in {1..1000};do grep ^MemFree /proc/meminfo;done >/dev/null real 0m0.803s user 0m0.619s sys 0m0.232s bash 4.4.19 $ a=$(</proc/meminfo) bash 4.4.19 $ time for i in {1..1000};do echo "$a"|grep ^MemFree; done >/dev/null real 0m1.182s user 0m1.425s sys 0m0.506s The same is true for dash and zsh . I suspected the special state of /proc/ files as a reason, but when I copy the content of /proc/meminfo to a regular file and use that the results are the same: bash 4.4.19 $ cat </proc/meminfo >meminfo bash 4.4.19 $ time for i in $(seq 1 1000);do grep ^MemFree meminfo; done >/dev/null real 0m0.790s user 0m0.608s sys 0m0.227s Using a here string to save the pipe makes it slightly faster, but still not as fast as with the files: bash 4.4.19 $ time for i in $(seq 1 1000);do <<<"$a" grep ^MemFree; done >/dev/null real 0m0.977s user 0m0.758s sys 0m0.268s Why is opening a file faster than reading the same content from a variable?
Here, it's not about opening a file versus reading a variable's content but more about forking an extra process or not. grep -oP '^MemFree: *\K[0-9]+' /proc/meminfo forks a process that executes grep that opens /proc/meminfo (a virtual file, in memory, no disk I/O involved) reads it and matches the regexp. The most expensive part in that is forking the process and loading the grep utility and its library dependencies, doing the dynamic linking, open the locale database, dozens of files that are on disk (but likely cached in memory). The part about reading /proc/meminfo is insignificant in comparison, the kernel needs little time to generate the information in there and grep needs little time to read it. If you run strace -c on that, you'll see the one open() and one read() systems calls used to read /proc/meminfo is peanuts compared to everything else grep does to start ( strace -c doesn't count the forking). In: a=$(</proc/meminfo) In most shells that support that $(<...) ksh operator, the shell just opens the file and read its content (and strips the trailing newline characters). bash is different and much less efficient in that it forks a process to do that reading and passes the data to the parent via a pipe. But here, it's done once so it doesn't matter. In: printf '%s\n' "$a" | grep '^MemFree' The shell needs to spawn two processes, which are running concurrently but interact between each other via a pipe. That pipe creation, tearing down, and writing and reading from it has some little cost. The much greater cost is the spawning of an extra process. The scheduling of the processes has some impact as well. You may find that using the zsh <<< operator makes it slightly quicker: grep '^MemFree' <<< "$a" In zsh and bash, that's done by writing the content of $a in a temporary file, that is less expensive than spawning an extra process, but will probably not give you any gain compared to getting the data straight off /proc/meminfo . That's still less efficient than your approach that copies /proc/meminfo on disk, as the writing of the temp file is done at each iteration. dash doesn't support here-strings, but its heredocs are implemented with a pipe that doesn't involve spawning an extra process. In: grep '^MemFree' << EOF $a EOF The shell creates a pipe, forks a process. The child executes grep with its stdin as the reading end of the pipe, and the parent writes the content at the other end of the pipe. But that pipe handling and process synchronisation is still likely to be more expensive than just getting the data straight off /proc/meminfo . The content of /proc/meminfo is short and takes not much time to produce. If you want to save some CPU cycles, you want to remove the expensive parts: forking processes and running external commands. Like: IFS= read -rd '' meminfo < /proc/meminfo memfree=${meminfo#*MemFree:} memfree=${memfree%%$'\n'*} memfree=${memfree#"${memfree%%[! ]*}"} Avoid bash though whose pattern matching is very ineficient. With zsh -o extendedglob , you can shorten it to: memfree=${${"$(</proc/meminfo)"##*MemFree: #}%%$'\n'*} Note that ^ is special in many shells (Bourne, fish, rc, es and zsh with the extendedglob option at least), I'd recommend quoting it. Also note that echo can't be used to output arbitrary data (hence my use of printf above).
{ "source": [ "https://unix.stackexchange.com/questions/501828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246819/" ] }
501,862
I have vim as the default editor on my Mac, and every time I run commands in the Mac terminal, it automatically opens "vim". How can I set up "nano" instead and make sure the terminal will open "nano" every time it is needed?
Set the EDITOR and VISUAL environment variables to nano . If you use bash , this is easiest done by editing your ~/.bashrc file and adding the two following lines: export EDITOR=nano export VISUAL="$EDITOR" to the bottom of the file. If the file does not exist, you may create it. Note that macOS users should probably modify the ~/.bash_profile file instead, as the abovementioned file is not used by default when starting a bash shell on this system. If you use some other shell, modify that shell's startup files instead (e.g. ~/.zshrc for zsh ). You should set both variables as some tools use one, and others may use the other. You will need to restart your terminal to have the changes take effect.
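To pick the change up without opening a new terminal, you can also source the file you edited and check the result (shown for bash; adjust the filename to the one you actually modified):

source ~/.bash_profile    # or: source ~/.bashrc
echo "$EDITOR" "$VISUAL"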
{ "source": [ "https://unix.stackexchange.com/questions/501862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/337902/" ] }
502,065
I have to copy files on a machine, and the data is immensely large. The servers need to keep serving normally, and there is usually a particular range of busy hours on them. So is there a way to run such commands so that if the server hits the busy hours, the process is paused, and when it gets out of that range, it is resumed? Intended result: cp src dst ; if the time is between 9:00-14:00, pause the process; after 14:00, resume the cp command.
You can pause execution of a process by sending it a SIGSTOP signal and then later resume it by sending it a SIGCONT . Assuming your workload is a single process (doesn't fork helpers running in background), you can use something like this: # start copy in background, store pid cp src dst & echo "$!" >/var/run/bigcopy.pid Then when busy time starts, send it a SIGSTOP : # pause execution of bigcopy kill -STOP "$(cat /var/run/bigcopy.pid)" Later on, when the server is idle again, resume it. # resume execution of bigcopy kill -CONT "$(cat /var/run/bigcopy.pid)" You will need to schedule this for specific times when you want it executed, you can use tools such as cron or systemd timers (or a variety of other similar tools) to get this scheduled. Instead of scheduling based on a time interval, you might choose to monitor the server (perhaps looking at load average, CPU usage or activity from server logs) to make a decision of when to pause/resume the copy. You also need to manage the PID file (if you use one), make sure your copy is actually still running before pausing it, probably you'll want to clean up by removing the PID file once the copy is finished, etc. In other words, you need more around this to make a reliable, but the base idea of using these SIGSTOP and SIGCONT signals to pause/resume execution of a process seems to be what you're looking for.
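A minimal way to schedule the pause/resume with cron could look like the sketch below (assuming the PID file path from above; the times match the 9:00-14:00 window in the question):

# crontab entries (edit with: crontab -e)
0 9  * * * [ -f /var/run/bigcopy.pid ] && kill -STOP "$(cat /var/run/bigcopy.pid)"
0 14 * * * [ -f /var/run/bigcopy.pid ] && kill -CONT "$(cat /var/run/bigcopy.pid)"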
{ "source": [ "https://unix.stackexchange.com/questions/502065", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117409/" ] }
502,540
This error shows up every time I install Kali Linux, whenever I try to boot it. Then it disappears and the screen blacks out. The error is the following: +[drm:vmw_host_log [vmwgfx]] *ERROR* Failed to send host log message. Here is also a screenshot of the error:
Try changing the display settings and check whether the error still occurs.
{ "source": [ "https://unix.stackexchange.com/questions/502540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/338462/" ] }
502,826
Quite often we see that a file we are trying to save in vim after editing is reported to be read-only. The way around this is to write with wq! . I am trying to figure out what goes on internally that allows the vim program to gain enough permission to write the read-only file. Is there an internal flag which is switched, or does vim temporarily gain privileges for some time?
When you do w! in Vim, what actually happens depends on who owns the file. If you (the current user) are the owner of the file, Vim will change the permissions to be writable before rewriting the file. It then removes the write permissions to restore the permission bits to what it was from the start. If you are not the owner of the file, but if you have write permissions in the current directory, Vim will delete the original file and write the document to a new file with the same name. The new file will then be assigned the same permissions as the original file, but will be owned by you. At no time does Vim gain elevated privileges to be able to write to the file. The mechanics described above are the available options that any program that needs to write to a read-only file has to choose from (i.e. either temporarily change the permission while writing to the file, or delete the file and create a new one), and what Vim ends up choosing to do may in the end may depend on a number of configurable settings. As seen in comments below, there is some confusion about the above. If you want to see for yourself what actually happens with your setup of Vim on your particular brand of Unix, I'd recommend tracing the system calls that Vim does while writing to a read-only file. How this is done depends on what Unix you are using. On Linux, this is likely done through e.g. strace vim file (then editing the file, saving it with w! and exiting). This is the first case (output from ktrace + kdump on OpenBSD): 13228 vim CALL chmod(0x19b1d94b4b10,0100644<S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH|S_IFREG>) 13228 vim NAMI "file" 13228 vim RET chmod 0 13228 vim CALL lseek(3,0x1000,SEEK_SET) 13228 vim RET lseek 4096/0x1000 13228 vim CALL write(3,0x19b1e0aa9000,0x1000) This changes permissions on the file so that it's writable (the S_IWUSR flag used with chmod() ) and writes the buffer to it. It then sets the original permissions: 13228 vim CALL fchmod(4,0100444<S_IRUSR|S_IRGRP|S_IROTH|S_IFREG>) 13228 vim RET fchmod 0 13228 vim CALL close(4) 13228 vim RET close 0 For the other case: It first unlinks (deletes) the file and then recreates it (before writing to the file and changing permissions later): 44487 vim CALL unlink(0x79fdbc1f000) 44487 vim NAMI "file" 44487 vim RET unlink 0 44487 vim CALL open(0x79fdbc1f000,0x201<O_WRONLY|O_CREAT>,0644<S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH>) 44487 vim NAMI "file" 44487 vim RET open 4
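A simple way to check which of the two strategies your Vim used, without tracing system calls, is to compare the file's inode before and after the forced write (a sketch; try it on a throwaway file):

ls -i file    # note the inode number
vim file      # make a change, then :w! and :q
ls -i file    # same inode: permissions were toggled; new inode: the file was replaced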
{ "source": [ "https://unix.stackexchange.com/questions/502826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73029/" ] }
503,312
I have an array of users who need to just upload files to their assigned home directories. I think sftp would suffice, but I don't want them to log in via a shell. So is it possible? My platform is CentOS 7, and the users' home directories are stored in, let's say, /personal/$user . I created a user with these settings: useradd -m -d /personal/user1 -s /sbin/nologin and assigned the user a password; then when I use sftp to log in to the machine, it says it cannot connect.
I like the following setup for managing SSH access, which I've used to manage a group of users on small fleets of servers. Security and ease of management is high on the list of my priorities. Its key features are easily managing SSH rights through Unix group membership, having tightly defined permissions, and being secure by default. Setting up Install software (optional but useful): yum install members # or apt install members Add groups: addgroup --system allowssh addgroup --system sftponly In /etc/ssh/sshd_config , ensure that the following to settings are No : PermitRootLogin no PubkeyAuthentication no PasswordAuthentication no And at the end of /etc/ssh/sshd_config , add these two stanzas: Match Group allowssh PubkeyAuthentication yes Match Group sftponly ChrootDirectory %h DisableForwarding yes ForceCommand internal-sftp (don't forget to restart SSH after editing the file) Explanation So, what does all this do? It always disables root logins, as an extra security measure. It always disables password-based logins (weak passwords are a big risk for servers running sshd). It only allows (pubkey) login for users in the allowssh group. Users in the sftponly group cannot get a shell over SSH, only SFTP. Managing who has access is then simply done by managing group membership (group membership changes take effect immediately, no SSH restart required; but note that existing sessions are not affected). members allowssh will show all users that are allowed to log in over SSH, and members sftponly will show all users that are limited to SFTP. # adduser marcelm allowssh # members allowssh marcelm # deluser marcelm allowssh # members allowssh # Note that your sftp users need to be members of both sftponly (to ensure they won't get a shell), and of allowssh (to allow login in the first place). Further information Please note that this configuration does not allow password logins ; all accounts need to use public key authentication. This is probably the single biggest security win you can get with SSH, so I argue it's worth the effort even if you have to start now. If you really don't want this, then also add PasswordAuthentication yes to the Match Group allowssh stanza. This will allow both pubkey and password auth for allowssh users. Alternatively, you can add another group (and Match Group stanza) to selectively grant users password-based logins. This configuration limits any sftponly user to their home directory. If you do not want that, remove the ChrootDirectory %h directive. If you do want the chrooting to work, it's important that the user's home directory (and any directory above it) is owned by root:root and not writable by group/other. It's OK for subdirectories of the home directory to be user-owned and/or writable. Yes, the user's home directory must be root-owned and unwritable to the user. Sadly, there are good reasons for this limitation. Depending on your situation, ChrootDirectory /home might be a good alternative. Setting the shell of the sftponly users to /sbin/nologin is neither necessary nor harmful for this solution, because SSH's ForceCommand internal-sftp overrides the user's shell. Using /sbin/nologin may be helpful to stop them logging in via other ways (physical console, samba, etc) though. This setup does not allow direct root logins over SSH; this forms an extra layer of security. If you really do need direct root logins, change the PermitRootLogin directive. Consider setting it to forced-commands-only , prohibit-password , and (as a last resort) yes . 
For bonus points, have a look at restricting who can su to root; add a system group called wheel , and add/enable auth required pam_wheel.so in /etc/pam.d/su .
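After editing sshd_config it is worth validating the configuration before restarting, and then testing both access paths with a throwaway account (a sketch; the user and host names are placeholders):

sshd -t                     # syntax-check /etc/ssh/sshd_config; prints nothing if it is OK
systemctl restart sshd      # on some distributions the unit is called ssh instead
sftp testuser@localhost     # a sftponly+allowssh member should get an SFTP prompt
ssh testuser@localhost      # a sftponly member should be refused an interactive shell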
{ "source": [ "https://unix.stackexchange.com/questions/503312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117409/" ] }
503,320
I have a wonderful ISP blocking all UDP traffic (except DNS to its own DNS servers). However, I want to use UDP for my VPN solution. I have root on both VPN endpoints, and both of them are running Linux. My idea is to simply overwrite the protocol field of my outgoing UDP packets so that they look like TCP, and to do the reverse on the server side. Thus, the routers/firewall of my wonderful ISP will see bad TCP packets, while my VPN processes will be able to communicate over UDP. I strongly suspect that the firewall of the ISP is not smart enough to detect that something is not okay. Of course it would be a dirty trick, but no more dirty than simply forbidding the second most used IP protocol and selling this as an ordinary internet connection. As far as I know, there are some iptables rules for that, but which ones?
For bonus points, have a look at restricting who can su to root; add a system group called wheel , and add/enable auth required pam_wheel.so in /etc/pam.d/su .
{ "source": [ "https://unix.stackexchange.com/questions/503320", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52236/" ] }
503,502
I know that Linux is available and has been ported to many different platforms such as x86, ARM, PowerPC, etc. However, in terms of porting, what is required exactly? My understanding is that Linux is software written in C. Therefore, when porting Linux from x86 to ARM or other architectures, for example, is it not just a matter of re-compiling the code with the compiler for the specific target architecture? Putting device drivers for different peripherals aside, what else would need to be done when porting Linux to a new architecture? Does the compiler not take care of everything for us?
Even though most of the code in the Linux kernel is written in C, there are still many parts of that code that are very specific to the platform where it's running and need to account for that. One particular example of this is virtual memory, which works in similar fashion on most architectures (hierarchy of page tables) but has specific details for each architecture (such as the number of levels in each architecture, and this has been increasing even on x86 with introduction of new larger chips.) The Linux kernel code introduces macros to handle traversing these hierarchies that can be elided by the compiler on architectures which have fewer levels of page tables (so that code is written in C, but takes details of the architecture into consideration.) Many other areas are very specific to each architecture and need to be handled with arch-specific code. Most of these involve code in assembly language though. Examples are: Context Switching : Context switching involves saving the value of all registers for the process being switched out and restoring the registers from the saved set of the process scheduled into the CPU. Even the number and set of registers is very specific to each architecture. This code is typically implemented in assembly, to allow full access to the registers and also to make sure it runs as fast as possible, since performance of context switching can be critical to the system. System Calls : The mechanism by which userspace code can trigger a system call is usually specific to the architecture (and sometimes even to the specific CPU model, for instance Intel and AMD introduced different instructions for that, older CPUs might lack those instructions, so details for those will still be unique.) Interrupt Handlers : Details of how to handle interrupts (hardware interrupts) are usually platform-specific and usually require some assembly-level glue to handle the specific calling conventions in use for the platform. Also, primitives for enabling/disabling interrupts are usually platform-specific and require assembly code as well. Initialization : Details of how initialization should happen also usually include details that are specific to the platform and often require some assembly code to handle the entry point to the kernel. On platforms that have multiple CPUs (SMP), details on how to bring other CPUs online are usually platform-specific as well. Locking Primitives : Implementation of locking primitives (such as spinlocks) usually involve platform-specific details as well, since some architectures provide (or prefer) different CPU instructions to efficiently implement those. Some will implement atomic operations, some will provide a cmpxchg that can atomically test/update (but fail if another writer got in first), others will include a "lock" modifier to CPU instructions. These will often involve writing assembly code as well. There are probably other areas where platform- or architecture-specific code is needed in a kernel (or, specifically, in the Linux kernel.) Looking at the kernel source tree, there are architecture-specific subtrees under arch/ and under include/arch/ where you can find more examples of this. Some are actually surprising, for instance you'll see that the number of system calls available on each architecture is distinct and some system calls will exist in some architectures and not others. (Even on x86, the list of syscalls differs between a 32-bit and a 64-bit kernel.) 
In short, there are plenty of cases a kernel needs to be aware of that are specific to a platform. The Linux kernel tries to abstract most of those, so higher-level algorithms (such as how memory management and scheduling work) can be implemented in C and work the same (or mostly the same) on all architectures.
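You can get a feel for how much of this platform-specific code exists by poking around a kernel source tree (assuming you have a recent mainline tree checked out):

ls arch/                               # one subdirectory per supported architecture
find arch/x86 -name '*.S' | wc -l      # assembly files that only make sense on x86
find arch/arm64 -name '*.S' | wc -l    # the arm64 port carries its own set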
{ "source": [ "https://unix.stackexchange.com/questions/503502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/323971/" ] }
503,601
We want to calculate the first numbers that we get from du du -b /tmp/* 6 /tmp/216c6f99-6671-4865-b8bc-7205f5388752_resources 668669 /tmp/hadoop7887078727316788325.tmp 6 /tmp/hadoop-hdfs 42456 /tmp/hive 32786 /tmp/hsperfdata_hdfs 6 /tmp/hsperfdata_hive 32786 /tmp/hsperfdata_root 262244 /tmp/hsperfdata_yarn so final sum will be sum=6+668669+6+42456+32786+6+32786+262244 echo $sum How we can do it by awk or perl one liners?
In AWK: { sum += $1 } END { print sum } So du -b /tmp/* | awk '{ sum += $1 } END { print sum }' Note that the result won’t be correct if the directories under /tmp have subdirectories themselves, because du produces running totals on directories and their children. du -s will calculate the sum for you correctly (on all subdirectories and files in /tmp , including hidden ones): du -sb /tmp and du -c will calculate the sum of the listed directories and files, correctly too: du -cb /tmp/*
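Since the question also asks about perl, the same sum can be written as a perl one-liner (autosplit puts the first column in $F[0]):

du -b /tmp/* | perl -lane '$sum += $F[0]; END { print $sum }'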
{ "source": [ "https://unix.stackexchange.com/questions/503601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
504,063
At some point, in some teaching material (from Linux Foundation) on Linux that I came across, the following is mentioned: ip command is more versatile and more efficient than ifconfig because it uses netlink sockets rather than ioctl system calls. Can anyone elaborate a bit on this because I cannot understand what's going on under the hood? P.S. I am aware of this topic on those tools but it does not address this specific difference on how they operate
The ifconfig command on operating systems such as FreeBSD and OpenBSD was updated in line with the rest of the operating system. It nowadays can configure all sorts of network interface settings on those operating systems, and handle a range of network protocols. The BSDs provide ioctl() support for these things. This did not happen in the Linux world. There are, today, three ifconfig commands: ifconfig from GNU inetutils jdebp % inetutils-ifconfig -l enp14s0 enp15s0 lo jdebp % inetutils-ifconfig lo lo Link encap:Local Loopback inet addr:127.0.0.1 Bcast:0.0.0.0 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:9087 errors:0 dropped:0 overruns:0 frame:0 TX packets:9087 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:51214341 TX bytes:51214341 jdebp % ifconfig from NET-3 net-tools jdebp % ifconfig -l ifconfig: option -l' not recognised. ifconfig: --help' gives usage information. jdebp % ifconfig lo lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> inet6 ::2 prefixlen 128 scopeid 0x80<compat,global> inet6 fe80:: prefixlen 10 scopeid 0x20<link> loop txqueuelen 1000 (Local Loopback) RX packets 9087 bytes 51214341 (48.8 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 9087 bytes 51214341 (48.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 jdebp % ifconfig from (version 1.40 of) the nosh toolset jdebp % ifconfig -l enp14s0 enp15s0 lo jdebp % ifconfig lo lo link up loopback running link address 00:00:00:00:00:00 bdaddr 00:00:00:00:00:00 inet4 address 127.0.0.1 prefixlen 8 bdaddr 127.0.0.1 inet4 address 127.53.0.1 prefixlen 8 bdaddr 127.255.255.255 inet6 address ::2 scope 0 prefixlen 128 inet6 address fe80:: scope 1 prefixlen 10 inet6 address ::1 scope 0 prefixlen 128 jdebp % sudo ifconfig lo inet4 127.1.0.2 alias jdebp % sudo ifconfig lo inet6 ::3/128 alias jdebp % ifconfig lo lo link up loopback running link address 00:00:00:00:00:00 bdaddr 00:00:00:00:00:00 inet4 address 127.0.0.1 prefixlen 8 bdaddr 127.0.0.1 inet4 address 127.1.0.2 prefixlen 32 bdaddr 127.1.0.2 inet4 address 127.53.0.1 prefixlen 8 bdaddr 127.255.255.255 inet6 address ::3 scope 0 prefixlen 128 inet6 address ::2 scope 0 prefixlen 128 inet6 address fe80:: scope 1 prefixlen 10 inet6 address ::1 scope 0 prefixlen 128 jdebp % As you can see, the GNU inetutils and NET-3 net-tools ifconfig s have some marked deficiencies, with respect to IPv6, with respect to interfaces that have multiple addresses, and with respect to functionality like -l . The IPv6 problem is in part some missing code in the tools themselves. But in the main it is caused by the fact that Linux does not (as other operating systems do) provide IPv6 functionality through the ioctl() interface. It only lets programs see and manipulate IPv4 addresses through the networking ioctl() s. Linux instead provides this functionality through a different interface, send() and recv() on a special, and somewhat odd, address family of sockets, AF_NETLINK . The GNU and NET-3 ifconfig s could have been adjusted to use this new API. The argument against doing so was that it was not portable to other operating systems, but these programs were in practice already not portable anyway so that was not much of an argument. But they weren't adjusted, and remain as aforeshown to this day. (Some people worked on them at various points over the years, but the improvements, sad to say, never made it into the programs. 
For example: Bernd Eckenfels never accepted a patch that added some netlink API capability to NET-3 net-tools ifconfig , 4 years after the patch had been written.) Instead, some people completely reinvented the toolset as an ip command, which used the new Linux API, had a different syntax, and combined several other functions behind a fashionable command subcommand -style interface. I needed an ifconfig that had the command-line syntax and output style of the FreeBSD ifconfig (which neither the GNU nor the NET-3 ifconfig has, and which ip most certainly does not have). So I wrote one. As proof that one could write an ifconfig that uses the netlink API on Linux, it does. So the received wisdom about ifconfig , such as what you quote, is not really true any more. It is now untrue to say that " ifconfig does not use netlink.". The blanket that covered two does not cover three. It has always been untrue to say that "netlink is more efficient". For the tasks that one does with ifconfig , there isn't really much in it when it comes to efficiency between the netlink API and the ioctl() API. One makes pretty much the same number of API calls for any given task. Indeed, each API call is two system calls in the netlink case, as opposed to one in the ioctl() system. And arguably the netlink API has the disadvantage that on a heavily-used system it explicitly incorporates the possibility of the tool never receiving an acknowledgement message informing it of the result of the API call. It is, furthermore, untrue to say that ip is "more versatile" than the GNU and NET-3 ifconfig s because it uses netlink . It is more versatile because it does more tasks, doing things in one big program that one would do with separate programs other than ifconfig . It is not more versatile simply by dint of the API that it uses internally for performing those extra tasks. There's nothing inherent to the API about this. One could write an all-in-one tool that used the FreeBSD ioctl() API, for example, and equally well state that it is "more versatile" than the individual ifconfig , route , arp , and ndp commands. One could write route , arp , and ndp commands for Linux that used the netlink API, too. Further reading Jonathan de Boyne Pollard (2019). ifconfig . nosh Guide . Softwares. Eduardo Ferro (2009-04-16). ifconfig: reports wrong ip address / initial patch . Debian bug #359676.
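If you want to see the two APIs for yourself, strace makes the difference visible (assuming strace and the net-tools ifconfig are installed; exact output varies by version):

strace -e trace=ioctl ifconfig lo                  # the net-tools tool issues SIOCGIF* ioctls
strace -e trace=sendmsg,recvmsg ip addr show lo    # ip talks over an AF_NETLINK socket instead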
{ "source": [ "https://unix.stackexchange.com/questions/504063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136100/" ] }
504,326
I have a text file called "shoplist.txt" which contains: drinks water cola fanta fruit banana orange And I want to get how many items per line I have. I'm able to extract drinks and fruit with the "cut" function, but how can I count how many words I have in each line? My actual code is: fileLine=`cat file.txt` #Here I get each line saving it to fileLine for line in $fileLine; do echo ((aux++)) done But this code doesn't work, because it saves each word to $fileLine (drinks, then water, then cola, ...). How can I get the first line and then count the words on that line?
If you can use awk , NF is the number of fields in the current line (by default, a field is a word delimited by any amount of whitespace). Use awk '{ print NF, $0 }' inputfile With your sample input, this will print 4 drinks water cola fanta 3 fruit banana orange
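And since the question ends with wanting only the first line, NR can restrict the count to that line:

awk 'NR == 1 { print NF }' inputfile    # word count of the first line only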
{ "source": [ "https://unix.stackexchange.com/questions/504326", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/335761/" ] }
504,381
Somewhat, but not quite, building upon the older question "ntpd vs. systemd-timesyncd - How to achieve reliable NTP syncing?" , I'd like to ask about the differences between chrony and systemd-timesyncd in terms of an NTP client . I know that systemd-timesyncd is a more or less minimal NTP client implementation, whereas chrony is a full-fledged NTP daemon solution that happens to include an NTP client. The Ubuntu Bionic Beaver release notes state the following: For simple time sync needs the base system already comes with systemd-timesyncd. Chrony is only needed to act as a time server or if you want the advertised more accurate and efficient syncing. I like the idea of using a minimal preinstalled tool to do the job, and I am pretty sure systemd-timesyncd will do the job for my use cases, but still I am curious: What are the real-world differences between the two in terms of accuracy? What are the differences in efficiency? What are "non-simple" time sync needs, i.e. the use cases for chrony as an NTP client?
The announcement of systemd-timesyncd in the systemd NEWS file does a good job of explaining the differences of this tool in comparison with Chrony and tools like it. (emphasis mine): A new "systemd-timesyncd" daemon has been added for synchronizing the system clock across the network. It implements an SNTP client . In contrast to NTP implementations such as chrony or the NTP reference server this only implements a client side, and does not bother with the full NTP complexity, focusing only on querying time from one remote server and synchronizing the local clock to it . Unless you intend to serve NTP to networked clients or want to connect to local hardware clocks this simple NTP client should be more than appropriate for most installations. [...] This setup is a common use case for most hosts in a server fleet. They will usually get synchronized from local NTP servers, which themselves get synchronized from multiple sources, possibly including hardware. systemd-timesyncd tries to provide an easy-to-use solution for that common use case. Trying to address your specific questions: What are the real world differences between the two in terms of accuracy? I believe you can get higher accuracy by getting synchronization data from multiple sources, which is specifically not a supported use case for systemd-timesyncd. But when you're using it to get synchronization data from central NTP servers connected to your reliable internal network, using multiple sources isn't really that relevant and you get good accuracy from a single source. If you're synchronizing your server from a trusted server in a local network and in the same datacenter , the difference in accuracy between NTP and SNTP will be virtually non-existent. NTP can take RTT into account and do timesmearing, but that's not that beneficial when your RTT is really small, which is the case of a fast local network and a nearby machine. You also don't need multiple sources if you can trust the one you're using. What are the differences in efficiency? Getting synchronization from a single source is much simpler than getting it from multiple sources, since you don't have to make decisions about which sources are better than others and possibly combine information from multiple sources. The algorithms are much simpler and will require less CPU load for the simple case. What are a "non simple" time sync needs aka the use-cases for chrony as NTP client? That's addressed in the quote above, but in any case these are use cases for Chrony that are not covered by systemd-timesyncd: running NTP server (so that other hosts can use this host as a source for synchrnoization); getting NTP synchronization information from multiple sources (which is important for hosts getting that information from public servers on the Internet); and getting synchronization information from the local clock, which usually involves specialized hardware such as GPS devices which can get accurate time information from satellites. These use cases require Chrony or ntpd or similar.
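For the simple client case, configuring systemd-timesyncd typically amounts to a couple of lines plus enabling it (a sketch; the server names are placeholders for your own NTP servers):

# /etc/systemd/timesyncd.conf
[Time]
NTP=ntp1.example.org ntp2.example.org
FallbackNTP=pool.ntp.org

timedatectl set-ntp true    # enables and starts systemd-timesyncd
timedatectl status          # shows whether the system clock is synchronized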
{ "source": [ "https://unix.stackexchange.com/questions/504381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87704/" ] }
504,444
There is JSON data which contains some numeric values. How can I convert all the numeric values to strings? (i.e. wrap them in quotes) Example: { "id":1, "customer":"user", "plate":"BMT-216-A", "country":"GB", "amount":1000, "pndNumber":20000, "zoneNumber":4 } should become { "id":"1", "customer":"user", "plate":"BMT-216-A", "country":"GB", "amount":"1000", "pndNumber":"20000", "zoneNumber":"4" }
$ jq 'map_values(tostring)' file.json { "id": "1", "customer": "user", "plate": "BMT-216-A", "country": "GB", "amount": "1000", "pndNumber": "20000", "zoneNumber": "4" } Redirect to a new file and then move that to the original filename. For a more thorough conversion of numbers in non-flat structures into strings, consider jq '(..|select(type == "number")) |= tostring' file.json This would examine every value recursively in the given document, and select the ones that are numbers. The selected values are then converted into strings. It would also, strictly speaking, look at the keys, but since these can't be plain numbers in JSON, no key would be selected. Example: $ jq . file.json { "a": { "b": 1 }, "b": null, "c": [ 1, 2, "hello", 4 ] } $ jq '(..|select(type == "number")) |= tostring' file.json { "a": { "b": "1" }, "b": null, "c": [ "1", "2", "hello", "4" ] } To additionally quote the null , change the select() to select(type == "number" or type == "null")
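Since jq cannot write its input file in place, the "redirect and move" step mentioned above would look something like this (or use sponge from moreutils if it is installed):

jq 'map_values(tostring)' file.json > file.json.tmp && mv file.json.tmp file.json
# jq 'map_values(tostring)' file.json | sponge file.json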
{ "source": [ "https://unix.stackexchange.com/questions/504444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/329780/" ] }
505,629
In the current situation, a certain script 'calling.sh' launches another script 'called.sh' in the background, performs other operations, sleeps for a while, and then terminates 'called.sh' with a pkill called.sh . This works fine. Then, I would also like to launch 'called.sh' from other terminals as a standalone script at any other time, whether before or after launching calling.sh. These independent instances should not be killed by 'calling.sh'. How can I achieve this? Intuition says that the calling script should be able to tell the process it started from any other namesakes that are running in the meantime. As a variant, 'calling.sh' may also launch 'called' which is a symbolic link to 'called.sh'. Does this complicate managing the above situation? Which specific cautions and adjustments does using a symbolic link require?
Don't use the name to kill it. Since the calling.sh script is the one starting the process you later want to kill, just use $! (from man bash ): ! Expands to the process ID of the job most recently placed into the background, whether executed as an asynchronous command or using the bg builtin So, if your calling.sh is like this: called.sh & ## do stuff pkill called.sh Change it to this: called.sh & calledPid=$! # do stuff kill "$calledPid"
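If the script might run long enough for the child to have exited on its own, it is slightly safer to check that the PID still exists before signalling it (kill -0 sends no signal, it only tests):

if kill -0 "$calledPid" 2>/dev/null; then
    kill "$calledPid"
fi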
{ "source": [ "https://unix.stackexchange.com/questions/505629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/132913/" ] }
505,828
Suppose a program cook takes one argument: the pathname of a text file containing the recipe of the food to cook. Suppose I wish to call this program from within a bash script, also suppose I already have the recipe in a string variable: #!/bin/bash the_recipe="$(cat << EOF wash cucumbers wash knife slice cucumbers EOF )" cook ... # What should I do here? It expects a file, but I only have a string. How can I pass the recipe to the command when it expects a filename argument? I thought about creating a temporary file just for the purpose passing a file, but I wish to know if there are alternative ways to solve this problem.
You can use the "fake" filename /dev/stdin which represents the standard input. So execute this: echo "$the_recipe" | cook /dev/stdin The echo command and the pipe sends the contents of the specified variable to the standard input of the next command cook , and that opens the standard input (as a separate file descriptor) and reads that.
{ "source": [ "https://unix.stackexchange.com/questions/505828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/217242/" ] }
506,347
I have read what is multi-user.target and the systemd documentation , which states that the multi-user.target is a special target. Further, a lot of the systemd examples contain that line. Why do so many example services contain that line? What would happen if they did not contain WantedBy=multi-user.target? Could you give me an example of when it would actually be advisable to not include that line in a service file definition? Along the same lines, when is it a good idea to keep that line in?
1.) multi-user.target is basically the closest equivalent of classic SysVinit runlevel 3 that systemd has. When a systemd system boots up, systemd is trying to make the system state match the state specified by default.target - which is usually an alias for either graphical.target or multi-user.target . multi-user.target normally defines a system state where all network services are started up and the system will accept logins, but a local GUI is not started. This is the typical default system state for server systems, which might be rack-mounted headless systems in a remote server room. graphical.target is another possible alias for default.target . Normally it's defined as a superset of the multi-user.target : it includes everything the multi-user.target does, plus the activation of a local GUI login. So kind of like runlevel 5 in classic SysVinit. The line WantedBy=multi-user.target in a service is essentially the same as specifying "this service should start in runlevels 3, 4 and 5" in SysVinit systems: it tells systemd that this service should be started as part of normal system start-up, whether or not a local GUI is active. However, WantedBy is separate from the enabled/disabled state: so in another sense, it's sort of a "preset": it determines under what conditions the automatic start may happen, but only when the service is enabled in the first place. 2.) if you omit the WantedBy=multi-user.target line and no other enabled service includes a Requires=your.service or Wants=your.service in its service definition, your service will not be started automatically. systemd works on dependencies, and at boot time, if nothing Requires or Wants your service, it won't be started even if the service is enabled. Sure, you could edit your default.target to add or delete Requires or Wants lines for any services you want started at boot time - but so that you can just drop a new service file into the system and have it work by default (which makes things very easy for software package managers), systemd has the WantedBy and RequiredBy keywords which can be used to insert Wants and Requires -type dependencies (respectively) from "the other end". 3.) You should omit the line if you don't want the service to be ever started automatically at boot time, or this service is a part of a chain of dependencies you've defined explicitly. For example, you might be refactoring server application A and for some reason or another decide to split some optional functionality off it into a separate service B, to allow the user the choice of not installing it if it isn't needed. You could then make service B a separate service-B.rpm , and define B.service with WantedBy=A.service to make systemd start up service B automatically whenever service A is started - but only when service-B.rpm is actually installed. Note that a Wants or WantedBy only says that the system should startup one service whenever another service or target is also started, but it specifies nothing at all about the startup/shutdown order. If you need service B to be already running when service A starts up, you'd need to add Before=A.service in the B.service file to explicitly specify the start-up order dependency. 4.) Anytime you do want the service to have the capability of being started automatically at boot time, and there are no other dependencies already defined.
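Put together, a typical service that should start at boot on a non-graphical server carries the line in its [Install] section, for example (a minimal sketch; the unit and binary names are made up):

# /etc/systemd/system/my-app.service
[Unit]
Description=Example long-running service
After=network.target

[Service]
ExecStart=/usr/local/bin/my-app

[Install]
WantedBy=multi-user.target

systemctl enable my-app.service    # creates the Wants= symlink under multi-user.target.wants/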
{ "source": [ "https://unix.stackexchange.com/questions/506347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7684/" ] }
507,011
I use a ProxyJump command for a number of ssh sessions I use daily, and also switch users a lot on these sessions and having to type exit 3 or 4 times in a row isn't too fun. I am aware of newline + ~ + . to terminate an ssh session, I still have to check if it terminates it amicably like an exit would, but how do you exit all sessions in the current shell with a single command or shortcut such that typing exit 3 or 4 times in my case becomes a one-time thing?
Ctrl - D will exit a shell in many cases. It is quicker than typing exit Enter . It's still not a single command to terminate everything, but holding Ctrl and hitting D several times is easier and faster. Not sure how valuable this is for your use case. Discussed in detail here .
{ "source": [ "https://unix.stackexchange.com/questions/507011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32917/" ] }
507,021
I am having an error where the fields of my variable are not being detected when trying to build a configuration using a jinja2 template. This is to sync Linux repositories to yum and apt based systems from a golden source using ansible. Each repository configuration would go into a different file and the task updated with the variable name. Base system configs should be able to be put in one file using multiple uses of "-" then a list of attributes. I have reviewed: for loop in jinja2 https://omarkhawaja.com/accessing-ansible-variables-with-jinja2-loops/ https://stackoverflow.com/questions/25418158/templating-multiple-yum-repo-files-with-ansible-template-module as well as others that are less relevant to what I am doing. var file: --- repo: - name: google_chrome async: 1 url: http://dl.google.com/linux/chrome/rpm/stable/x86_6 ... include var task: - name: Include var into the 'chrome' variable. include_vars: file: google_chrome_repo.yaml name: chrome task to use template module: - name: generate config for Centos template: src: yum_template.j2 dest: "/etc/yum.repos.d/{{ item }}.repo" backup: yes with_items: - chrome when: - ansible_distribution == 'CentOS' template: {% for i in item %} [ {{ i.name }} ] async = {{ i.async }} baseurl = {{ i.url }} enabled = {{ i.repo_enable }} enablegroups = {{ i.pkggrp_enable }} failovermethod = {{ i.ha_method }} gpgkey = {{ i.gpgkey_url }} http_caching = {{ i.http_caching }} keepcache = {{ i.keepcache }} metadata_expire = {{ i.metadata_expire }} mirrorlist = {{ i.mirrorlist }} mirrorlist_expire = {{ i.mirrorlist_expire }} name = {{ i.descrip }} protect = {{ i.protect }} proxy = {{ i.proxy_config }} proxy_password = {{ i.proxy_username }} proxy_username = {{ i.proxy_password }} repo_gpgcheck = {{ i.repo_gpgcheck }} retries = {{ i.repo_retry_count }} s3_enabled = {{ i.s3_enabled }} sslverify = {{ i.ssl_verify }} timeout = {{ i.timeout }} {% endfor %} error: failed: [192.168.33.31] (item=chrome) => {"changed": false, "item": "chrome", "msg": "AnsibleUndefinedVariable: 'unicode object' has no attribute 'name'"} whichever attribute in role is first called by the jinja2 template will fail in this way. If I change the following so name isn't referenced and "i.name" becomes just "chrome" it will fail on async I can see the variable is imported ok: [192.168.33.31] => {"ansible_facts": {"chrome": {"repo": [{"async": 1, "descrip": "Google Chrome Repository", "gpgkey_url": "https://dl.google.com/linux/linux_signing_key.pub", "ha_method": "roundrobin", "http_caching": 1, "keepcache": 1, "metadata_expire": 21600, "mirrorlist": null, "mirrorlist_expire": 21600, "name": "google_chrome", "pkggrp_enable": 1, "protect": 0, "proxy_config": "__None__", "proxy_password": null, "proxy_username": null, "repo_enable": 1, "repo_gpgcheck": 1, "repo_retry_count": 10, "s3_enabled": 0, "ssl_verify": 1, "timeout": 1, "url": "http://dl.google.com/linux/chrome/rpm/stable/x86_6"}]}}, "ansible_included_var_files": ["/var/lib/awx/projects/_6__trowe/playbooks/roles/Manage_Linux_Repos/vars/google_chrome_repo.yaml"], "changed": false} I do see it says "unicode" variable where I would expect it to be a dict. I have also tried with_dict and the error says that the variable is not a dictionary. However if I structure the variable file without "repo:", it will error saying it was not passed a dictionary object...
{ "source": [ "https://unix.stackexchange.com/questions/507021", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/342384/" ] }
507,131
Today I got this warning issued by OpenSSL in Cygwin after updating some packages, I believe openssl was included: *** WARNING : deprecated key derivation used. Using -iter or -pbkdf2 would be better. The OpenSSL version used in Cygwin was: OpenSSL 1.1.1b 26 Feb 2019 This happened while decrypting my Backup on BluRay, which I created on Linux Mint 19.1 , where the OpenSSL version is significantly older : OpenSSL 1.1.0g 2 Nov 2017 The command used to encrypt and decrypt (just add -d to the end) was: $ openssl enc -aes-256-cbc -md sha256 -salt -in "${InputFilePath}" -out "${OutputFilePath}" What does this warning mean and can I do anything to avoid it in the future backups?
Comparing the Synopsys of the two main and recent versions of OpenSSL, let me quote the man pages. OpenSSL 1.1.0 openssl enc -ciphername [-help] [-ciphers] [-in filename] [-out filename] [-pass arg] [-e] [-d] [-a/-base64] [-A] [-k password] [-kfile filename] [-K key] [-iv IV] [-S salt] [-salt] [-nosalt] [-z] [-md digest] [-p] [-P] [-bufsize number] [-nopad] [-debug] [-none] [-engine id] OpenSSL 1.1.1 openssl enc -cipher [-help] [-ciphers] [-in filename] [-out filename] [-pass arg] [-e] [-d] [-a] [-base64] [-A] [-k password] [-kfile filename] [-K key] [-iv IV] [-S salt] [-salt] [-nosalt] [-z] [-md digest] [-iter count] [-pbkdf2] [-p] [-P] [-bufsize number] [-nopad] [-debug] [-none] [-rand file...] [-writerand file] [-engine id] There obviously are some greater differences, namely considering this question, there are these two switches missing in the 1.1.0: pbkdf2 iter You have basically two options now. Either ignore the warning or adjust your encryption command to something like: openssl enc -aes-256-cbc -md sha512 -pbkdf2 -iter 100000 -salt -in InputFilePath -out OutputFilePath Where these switches: -aes-256-cbc is what you should use for maximum protection or the 128-bit version, the 3DES (Triple DES) got abandoned some time ago, see Triple DES has been deprecated by NIST in 2017 , while AES gets accelerated by all modern CPUs by a lot; you can simply verify if your CPU has the AES-NI instruction set for example using grep aes /proc/cpuinfo ; win, win -md sha512 is the faster variant of SHA-2 functions family compared to SHA-256 while it might be a bit more secure; win, win -pbkdf2 : use PBKDF2 (Password-Based Key Derivation Function 2) algorithm -iter 100000 is overriding the default count of iterations (10000) for the password, quoting the man page: Use a given number of iterations on the password in deriving the encryption key. High values increase the time required to brute-force the resulting file. This option enables the use of PBKDF2 algorithm to derive the key.
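For decryption the same key-derivation options have to be given again, otherwise OpenSSL cannot reproduce the key; so the matching command is simply the one above with -d added (file names are placeholders):

openssl enc -d -aes-256-cbc -md sha512 -pbkdf2 -iter 100000 -salt -in EncryptedFile -out DecryptedFile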
{ "source": [ "https://unix.stackexchange.com/questions/507131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
507,441
I believe that the FAT32 file system does not support file permissions, however when I do ls -l on a FAT32 partition, ls -l shows that the files have permissions: -rw-r--r-- 1 john john 11 Mar 20 15:43 file1.txt -rw-r--r-- 1 john john 5 Mar 20 15:49 file2.txt Why is ls -l displaying the permissions of files?
The filesystem as stored on disk doesn't store all file permissions, but the filesystem driver has to provide them to the operating system since they are an integral part of the Unix filesystem concept and the system call interfaces have no way of presenting that the permissions are missing. Also consider what would happen if a file didn't have any permission bits at all? Would it be the same as 0777 , i.e. access to all; or the same as 0000 , i.e. no access to anyone? But both of those are file permissions, so why not show them? Or do something more useful and have a way to set some sensible permissions. So, the driver fakes some permissions, mostly same ones for all files. The permissions along with the files' owner and group are configurable at mount time. These are described under "Mount options for fat" in the mount(8) man page : Mount options for fat (Note: fat is not a separate filesystem, but a common part of the msdos, umsdos and vfat filesystems.) uid=value and gid=value Set the owner and group of all files. (Default: the UID and GID of the current process.) umask=value Set the umask (the bitmask of the permissions that are not present). The default is the umask of the current process. The value is given in octal. dmask=value Set the umask applied to directories only. The default is the umask of the current process. The value is given in octal. fmask=value Set the umask applied to regular files only. The default is the umask of the current process. The value is given in octal. Note that the one useful permission the FAT filesystems store is the read-only -bit, and if you run chmod ugo-w file , the read permissions on it will disappear. That's also probably the reason that the above options take their values as permissions to masks away , so fmask=0133 would result in all files having all the x permissions removed and w removed from the group and others. The files would then have the permissions 0644 / rw-r--r-- or 0444 / r--r--r-- , depending on if the read-only bit is cleared or set. Also, the defaults are inherited from the process calling mount() , so if you call mount from the command line, the shell's umask will apply.
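As an illustration, mounting a FAT32 partition with explicit options makes the faked permissions whatever you ask for (the device, mount point and IDs below are examples):

mount -t vfat -o uid=1000,gid=1000,fmask=0133,dmask=0022 /dev/sdb1 /mnt/usb
ls -l /mnt/usb    # regular files show as rw-r--r-- (or r--r--r-- if the read-only bit is set)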
{ "source": [ "https://unix.stackexchange.com/questions/507441", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/342731/" ] }
507,837
My opinion is yes, it does, because all useful exposure to the outside world (non-privileged processor mode) would first require a process running in the outside world. That would require a file system, even a temporary, in-RAM, file system. Another engineer disagrees with me, but I can't seem to prove this beyond all (unknown to me) cases. Does the answer to this question depend on the definition of 'running'?
That's rather an odd question, because you don't run the kernel like you run a program. The kernel is a platform to run programs on. Of course there is setup and shutdown code, but it's not possible to run the kernel on its own. There must always be a main "init" process. And the kernel will panic if it's not there. If init tries to exit, the kernel will also panic. These days the init process is something like systemd. If not otherwise specified, the kernel will try to run a program from a list of locations starting with /sbin/init . See the init parameter here: http://man7.org/linux/man-pages/man7/bootparam.7.html . In an emergency you can boot Linux with init=/bin/bash . But notice how you always specify a file on the file system to run. So the kernel will panic if it starts up and has no file system, because without one there is no way to load init. Some confusion may arise because of an initialisation phase of the kernel. An initial ramdisk is loaded from an image on disk containing vital drivers and setup scripts. These are executed before the file system is loaded. But make no mistake, the initial ramdisk is itself a file system. With an initial ramdisk, /init is called (which is stored on the initial ramdisk). In many distributions it is ultimately this which calls /sbin/init . Again, without a file system, this is impossible.
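You can see the result of this on any running system: whatever was loaded as init is always process 1, the ancestor of everything else:

ps -p 1 -o pid,comm    # PID 1 is the init process (systemd, init, ...) started by the kernel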
{ "source": [ "https://unix.stackexchange.com/questions/507837", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44218/" ] }
508,221
my raspberry pi 3 model B, running Arch, has an issue with bluetooth. First of all: Bluetooth has worked flawlessly previously pi-bluetooth from the AUR is up to date bluez and bluez-utils are up to date The system is up to date as well (just ran pacman -Syu) Still, when I try to use the bluetooth interface, it doesn't work. bluetoothctl (as root), when I run "scan on", tells me Failed to start discovery: org.bluez.Error.NotReady wminput can't find the bluetooth interface: No Bluetooth interface found unable to connect "systemctl status bluetooth" has the following output: ● bluetooth.service - Bluetooth service Loaded: loaded (/usr/lib/systemd/system/bluetooth.service; enabled; vendor preset: disabled) Active: active (running) since Sat 2019-03-23 21:32:47 CET; 9min ago Docs: man:bluetoothd(8) Main PID: 2005 (bluetoothd) Status: "Running" Tasks: 1 (limit: 1404) CGroup: /system.slice/bluetooth.service └─2005 /usr/lib/bluetooth/bluetoothd Mar 23 21:32:47 media.lan systemd[1]: Starting Bluetooth service... Mar 23 21:32:47 media.lan bluetoothd[2005]: Bluetooth daemon 5.50 Mar 23 21:32:47 media.lan systemd[1]: Started Bluetooth service. Mar 23 21:32:47 media.lan bluetoothd[2005]: Starting SDP server Mar 23 21:32:47 media.lan bluetoothd[2005]: Bluetooth management interface 1.14 initialized I am at my wits end here, everything seems to be fine, yet nothing works. What is going on here?
Okay, wow, turns out all I had to do was run bluetoothctl power on
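For reference, a typical interactive bluetoothctl session to get a scan going again looks roughly like this (the device address is a placeholder):
$ bluetoothctl
[bluetooth]# power on
[bluetooth]# agent on
[bluetooth]# scan on
[NEW] Device XX:XX:XX:XX:XX:XX SomeDevice
[bluetooth]# pair XX:XX:XX:XX:XX:XX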
{ "source": [ "https://unix.stackexchange.com/questions/508221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/343236/" ] }
508,724
I'm using a docker image as a base for my own development that adds the jessie backports repository in its Dockerfile and uses that to install a dependency. This image uses the following command to add the repository: echo "deb http://ftp.debian.org/debian jessie-backports main" >> /etc/apt/sources.list The problem is that fetching packages from the backports repository now fails with the following error (this used to work previously): W: Failed to fetch http://ftp.debian.org/debian/dists/jessie-backports/main/binary-amd64/Packages 404 Not Found W: Failed to fetch http://deb.debian.org/debian/dists/jessie-updates/main/binary-amd64/Packages 404 Not Found I looked on that server, and those paths are indeed not present there. I tried to figure out on the Debian backports site whether this particular repository should still be available, and I didn't find any indication that this was deprecated or something like that. Is this a temporary issue with the repository, or is the jessie-backports repository not available anymore? And if this is not a temporary issue, what options do I have to use this or an equivalent repository without upgrading to the newer Debian stable version?
Wheezy and Jessie were recently removed from the mirror network , so if you want to continue fetching Jessie backports, you need to use archive.debian.org instead: deb [check-valid-until=no] http://archive.debian.org/debian jessie-backports main (Validity checks need to be disabled since the repository is no longer being updated. Jessie’s apt doesn’t support the check-valid-until flag, see inostia’s answer for details, and the configuration summary further down in this answer.) The jessie-updates repository has been removed: all the updates have been merged with the main repository, and there will be no further non-security updates. So any references to jessie-updates in sources.list or sources.list.d files need to be removed. Security updates will continue to be provided , on LTS-supported architectures, in the security repository, until June 30, 2020. Since you’re building a container image, I highly recommend basing it on Debian 9 (Stretch) instead. To stay on Debian 8 (Jessie), your repositories should end up looking like deb http://cdn-fastly.deb.debian.org/debian/ jessie main deb-src http://cdn-fastly.deb.debian.org/debian/ jessie main deb http://security.debian.org/ jessie/updates main deb-src http://security.debian.org/ jessie/updates main deb http://archive.debian.org/debian jessie-backports main deb-src http://archive.debian.org/debian jessie-backports main (without the jessie-updates repository). You’ll also need to disable validity checks in /etc/apt/apt.conf (which will apply to all repositories): Acquire::Check-Valid-Until "false";
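In a Dockerfile based on a Jessie image, the same change could be made with something roughly like this (untested sketch, kept minimal):
RUN echo 'deb http://archive.debian.org/debian jessie main' > /etc/apt/sources.list && \
    echo 'deb http://archive.debian.org/debian jessie-backports main' >> /etc/apt/sources.list && \
    echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/90archive && \
    apt-get update
Add the security.debian.org line from above as well if the image should keep receiving the LTS security updates.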
{ "source": [ "https://unix.stackexchange.com/questions/508724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/343693/" ] }
508,756
I've hacked together this bash script which tests if the user has superuser privileges and if they don't it asks for them. Ultimately I'm trying to invert the second 'if' statement so that I can remove the following two lines (the echo "password ok" and the else on the following line) # Root user only if [[ "$EUID" != 0 ]]; then sudo -k # make sure to ask for password on next sudo if sudo true; then echo "Password ok" else echo "Aborting script" exit 1 fi fi echo "do my ops" Is the purpose of "true" on the fourth line just a null-statement? I need to invert the test on the fourth line, how do I do so? Here is what I've tried: if sudo false; then if sudo true == false; then if [!(sudo true)]; then
true in bash isn't a keyword, it's a program that instantly exits with a successful exit code. Likewise, false is a program that exits with an unsuccessful exit code. You can try this out by running both programs from your terminal, and then reading the $? variable, which contains the exit code of the last program; true echo $? # 0 false echo $? #1 if sudo true isn't equivalent to if sudo == true . if sudo true is running the true program using sudo , and checking the exit code. Therefore: if sudo false; then is running the program false as sudo. The return will always be false. if sudo true == false will run the program true with the arguments == and false using sudo . This obviously isn't want you intended. if [!(sudo true)] is invalid syntax. What you are probably looking for is if ! sudo true;
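Putting that together, a corrected version of the original check might look like this sketch, keeping the rest of the script as it was:
if [[ "$EUID" != 0 ]]; then
    sudo -k                  # make sure to ask for the password on the next sudo
    if ! sudo true; then     # prompts for the password; fails if it is wrong
        echo "Aborting script"
        exit 1
    fi
fi
echo "do my ops"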
{ "source": [ "https://unix.stackexchange.com/questions/508756", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85659/" ] }
509,173
There's a script (let's call it echoer ) that prints to screen a bunch of information. I'd like to be able to only see lines after a pattern is found. I imagine the usage of a solution to look something like echoer | solution_command <pattern> Ideally pattern would be a regular expression, but hard value strings would be enough for me.
AWK can do this with pattern ranges, which allows the use of any regular expression: echoer | awk '/pattern/,0' will print echoer ’s output starting with the first line matching pattern . AWK is pattern-based, and is typically used with a “if this pattern matches, do this” type of approach. “This pattern” can be a range of patterns, defined as “when this pattern matches, start doing this, until this other pattern matches”; this is specified by writing two patterns separated by a comma, as above. Patterns can be text matches, as in /pattern/ , where the current line is checked against the pattern, interpreted as a regular expression; they can also be general expressions, evaluated for every line, and considered to match if their result is non-zero or non-empty. In AWK, the default action is to print the current line. Putting all this together, awk '/pattern/,0' looks for lines matching pattern , and once it finds one, applies the default action to all lines until the 0 condition matches (is non-zero). awk '/pattern/,""' would work too. The Gawk manual goes into much more detail.
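A quick way to see the behaviour with some made-up input:
printf 'one\ntwo\npattern here\nthree\n' | awk '/pattern/,0'
prints:
pattern here
three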
{ "source": [ "https://unix.stackexchange.com/questions/509173", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/344115/" ] }
509,178
I'm searching a folder in an S3 bucket using this command: aws s3 ls s3://bucketname/dir1/dir2/dir3 --recursive | grep -i 'dir3' It's getting results like dir1/dir2/dir3/1/aaa.txt dir1/dir2/dir3/1/bbb.txt dir1/dir2/dir3/1/ccc.txt However, I only need the directory path, like dir1/dir2/dir3 I am able to remove the unnecessary text to get the directory path with this: aws s3 ls s3://bucketname/dir1/dir2/dir3 --recursive | grep -i 'dir2' | head -n 1 | sed 's/1.*//' But this does not work with a multiple-string search in grep: aws s3 ls s3://bucketname/dir1/dir2/dir3 --recursive | grep -i 'dir3\|folder3' I need output like this: dir1/dir2/dir3 folder1/folder2/folder3
{ "source": [ "https://unix.stackexchange.com/questions/509178", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/344117/" ] }
509,232
On my Debian GNU/Linux 9 system, when a binary is executed, the stack is uninitialized but the heap is zero-initialized. Why? I assume that zero-initialization promotes security but, if for the heap, then why not also for the stack? Does the stack, too, not need security? My question is not specific to Debian as far as I know. Sample C code: #include <stddef.h> #include <stdlib.h> #include <stdio.h> const size_t n = 8; // -------------------------------------------------------------------- // UNINTERESTING CODE // -------------------------------------------------------------------- static void print_array( const int *const p, const size_t size, const char *const name ) { printf("%s at %p: ", name, p); for (size_t i = 0; i < size; ++i) printf("%d ", p[i]); printf("\n"); } // -------------------------------------------------------------------- // INTERESTING CODE // -------------------------------------------------------------------- int main() { int a[n]; int *const b = malloc(n*sizeof(int)); print_array(a, n, "a"); print_array(b, n, "b"); free(b); return 0; } Output: a at 0x7ffe118997e0: 194 0 294230047 32766 294230046 32766 -550453275 32713 b at 0x561d4bbfe010: 0 0 0 0 0 0 0 0 The C standard does not ask malloc() to clear memory before allocating it, of course, but my C program is merely for illustration. The question is not a question about C or about C's standard library. Rather, the question is a question about why the kernel and/or run-time loader are zeroing the heap but not the stack. ANOTHER EXPERIMENT My question regards observable GNU/Linux behavior rather than the requirements of standards documents. If unsure what I mean, then try this code, which invokes further undefined behavior ( undefined, that is, as far as the C standard is concerned) to illustrate the point: #include <stddef.h> #include <stdlib.h> #include <stdio.h> const size_t n = 4; int main() { for (size_t i = n; i; --i) { int *const p = malloc(sizeof(int)); printf("%p %d ", p, *p); ++*p; printf("%d\n", *p); free(p); } return 0; } Output from my machine: 0x555e86696010 0 1 0x555e86696010 0 1 0x555e86696010 0 1 0x555e86696010 0 1 As far as the C standard is concerned, behavior is undefined, so my question does not regard the C standard. A call to malloc() need not return the same address each time but, since this call to malloc() does indeed happen to return the same address each time, it is interesting to notice that the memory, which is on the heap, is zeroed each time. The stack, by contrast, had not seemed to be zeroed. I do not know what the latter code will do on your machine, since I do not know which layer of the GNU/Linux system is causing the observed behavior. You can but try it. UPDATE @Kusalananda has observed in comments: For what it's worth, your most recent code returns different addresses and (occasional) uninitialised (non-zero) data when run on OpenBSD. This obviously does not say anything about the behaviour that you are witnessing on Linux. That my result differs from the result on OpenBSD is indeed interesting. Apparently, my experiments were discovering not a kernel (or linker) security protocol, as I had thought, but a mere implementational artifact. In this light, I believe that, together, the answers below of @mosvy, @StephenKitt and @AndreasGrapentin settle my question. See also on Stack Overflow: Why does malloc initialize the values to 0 in gcc? (credit: @bta).
The storage returned by malloc() is not zero-initialized. Do not ever assume it is. In your test program, it's just a fluke: I guess the malloc() just got a fresh block off mmap() , but don't rely on that, either. For an example, if I run your program on my machine this way: $ echo 'void __attribute__((constructor)) p(void){ void *b = malloc(4444); memset(b, 4, 4444); free(b); }' | cc -include stdlib.h -include string.h -xc - -shared -o pollute.so $ LD_PRELOAD=./pollute.so ./your_program a at 0x7ffd40d3aa60: 1256994848 21891 1256994464 21891 1087613792 32765 0 0 b at 0x55834c75d010: 67372036 67372036 67372036 67372036 67372036 67372036 67372036 67372036 Your second example is simply exposing an artifact of the malloc implementation in glibc; if you do that repeated malloc / free with a buffer larger than 8 bytes, you will clearly see that only the first 8 bytes are zeroed, as in the following sample code. #include <stddef.h> #include <stdlib.h> #include <stdio.h> const size_t n = 4; const size_t m = 0x10; int main() { for (size_t i = n; i; --i) { int *const p = malloc(m*sizeof(int)); printf("%p ", p); for (size_t j = 0; j < m; ++j) { printf("%d:", p[j]); ++p[j]; printf("%d ", p[j]); } free(p); printf("\n"); } return 0; } Output: 0x55be12864010 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0x55be12864010 0:1 0:1 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 0x55be12864010 0:1 0:1 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 0x55be12864010 0:1 0:1 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4
{ "source": [ "https://unix.stackexchange.com/questions/509232", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18202/" ] }
509,915
Although I like Debian for various reasons, it is not always easy to find documentation on specific aspects of this distribution and its policies. My question is: what is the difference between the contrib and non-free package repositories? From the little explanation I could find, if I am not mistaken: non-free is for packages whose licences are not free; contrib is for dependencies of non-free packages (which makes them not part of Debian). But it seems odd to me to have two repositories for, so to speak, the same purpose, which is making non-free software available inside Debian. I would like to know if I am missing something here.
non-free packages are packages not complying to the Debian Free Software Guidelines definition. E.g.: nvidia-driver which provides a proprietary driver. contrib packages are packages that do comply with the DFSG, but depend on non-free packages, or which depend on some non-free software downloaded (by the package or having to be downloaded manually) to work properly. So they don't end up in main . E.g.: bumblebee-nvidia which while DFSG compliant, isn't really useful without the non-free package nvidia-driver , so it's put in the contrib section, or vice which requires to download (while respecting copyrights and laws) ROMs to work properly. If somebody doesn't want to or can't use non-free software, that person most probably doesn't need or won't be able to use software depending on it, so it's more useful to put them separately in a contrib section. That person won't even have to download the contrib section. UPDATE: the software in contrib , and its sources are still available for free use. Interesting parts could be reused in an other project, or the non-free (or non available) parts it depends upon could be replaced (e.g.: replace graphics, music etc. assets for a game engine in contrib ). Having it separate from non-free helps to know which parts can be reused. trivia: the Open Source Definition was initially created by removing any mention of Debian in DFSG.
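For reference, enabling those sections is just a matter of listing them in the APT sources; for example (the release name is only an example):
deb http://deb.debian.org/debian buster main contrib non-free
deb http://security.debian.org/debian-security buster/updates main contrib non-free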
{ "source": [ "https://unix.stackexchange.com/questions/509915", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/159254/" ] }
510,031
My proc info: lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 1 Core(s) per socket: 4 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 158 Model name: Intel(R) Core(TM) i5-7400 CPU @ 3.00GHz Stepping: 9 CPU MHz: 1036.788 CPU max MHz: 3500,0000 CPU min MHz: 800,0000 BogoMIPS: 6000.00 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 6144K NUMA node0 CPU(s): 0-3 I tried: sudo apt-get install gcc-arm-linux-gnueabi g++-arm-linux-gnueabi If I go for: arm-linux-gcc arm-linux-gcc: command not found How to install cross-compiler?
TLDR you need to call arm-linux-gnueabi-gcc not arm-linux-gcc . It looks like you've just got the wrong file name. For reference apt-file is a useful tool. sudo apt-get install apt-file sudo apt-file update apt-file search -x 'gcc$' | grep 'gcc-arm-linux-gnueabi' This searches any file ending gcc in any package with gcc-arm-linux-gnueabi in the name. The result is: gcc-arm-linux-gnueabi: /usr/bin/arm-linux-gnueabi-gcc So if you have installed gcc-arm-linux-gnueabi you should have a file /usr/bin/arm-linux-gnueabi-gcc .
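A quick smoke test of the toolchain, with arbitrary file names:
arm-linux-gnueabi-gcc -o hello hello.c
file hello    # should report an ARM EABI executable rather than an x86-64 one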
{ "source": [ "https://unix.stackexchange.com/questions/510031", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143955/" ] }
510,046
I can print current directory using pwd , but this gives me the path I navigated to get to where I am. I need to know which disk/partition current directory is on. For example, if I create symlink user@pc:~$ ln -s /media/HD1 hard_disk and then navigate to ~/hard_disk and run pwd it will print /home/user/hard_disk . I would like to get the actual path I'm currently on or better just the actual filesystem I'm currently on, which corresponds to one in df .
pwd -P will give you the physical directory you are in, i.e. the pathname of the current working directory with the symbolic links resolved. Using df . would give you the df output for whatever partition the current directory is residing on. Example (on an OpenBSD machine): $ pwd /usr/ports $ pwd -P /extra/ports $ df . Filesystem 512-blocks Used Avail Capacity Mounted on /dev/sd3a 103196440 55987080 42049540 57% /extra To parse out the mountpoint from this output, you may use something like $ df -P . | sed -n '$s/[^%]*%[[:blank:]]*//p' /extra To parse out the filesystem device used, use $ df -P . | sed -n '$s/[[:blank:]].*//p' /dev/sd3a I believe some Linux systems also supports findmnt --target . (where --target . can be replaced by -T . ) or, for more terse output, findmnt --output target --noheadings --target . (where --noheadings may be replaced by -n , and --output target may be replaced by -o target ) to get the mountpoint holding the filesystem that the current directory is located on. Use --output source to get the mounted device node.
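On systems with GNU coreutils, stat can also report the mount point directly:
stat -c '%m' .    # prints the mount point of the filesystem holding the current directory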
{ "source": [ "https://unix.stackexchange.com/questions/510046", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149200/" ] }
510,216
This works on a shell (bash, dash) prompt: [ -z "" ] && echo A || echo B A However, I am trying to write a POSIX shell script, it starts like this: #!/bin/sh [ "${#}" -eq 1 ] || echo "Invalid number of arguments, expected one."; exit 1 readonly raw_input_string=${1} [ -z "${raw_input_string}" ] && echo "The given argument is empty."; exit 1 And I don't know why, but I don't get the message : The given argument is empty. if I call the script like this: ./test_empty_argument "" Why is that?
Note that your line [ "${#}" -eq 1 ] || echo "Invalid number of arguments, expected one."; exit 1 this is the same as [ "${#}" -eq 1 ] || echo "Invalid number of arguments, expected one." exit 1 (an unquoted ; can, in most circumstances, be replaced by a newline character) This means that the exit 1 statement is always executed regardless of how many arguments were passed to the script. This in turn means that the message The given argument is empty. would never have a chance of getting printed. To execute more than a single statement after a test using the "short-circuit syntax", group the statements in { ...; } . The alternative is to use a proper if statement (which, IMHO, looks cleaner in a script): if [ "$#" -ne 1 ]; then echo 'Invalid number of arguments, expected one.' >&2 exit 1 fi You have the same issue with your second test. Regarding [ -z "" ] && echo A || echo B This would work for the given example, but the generic some-test && command1 || command2 would not be the same as if some-test; then command1 else command2 fi Instead, it is more like if ! { some-test && command1; }; then command2 fi or if some-test && command1; then : else command2 fi That is, if either the test or the first command fails, the second command executes, which means it has the potential to execute all three involved statements.
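Applying both fixes, the start of the script could be written as the following sketch:
#!/bin/sh
if [ "$#" -ne 1 ]; then
    echo 'Invalid number of arguments, expected one.' >&2
    exit 1
fi
readonly raw_input_string=$1
if [ -z "$raw_input_string" ]; then
    echo 'The given argument is empty.' >&2
    exit 1
fi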
{ "source": [ "https://unix.stackexchange.com/questions/510216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
510,220
After reading ilkkachu's answer to this question I learned on the existence of the declare (with argument -n ) shell built in. help declare brings: Set variable values and attributes. Declare variables and give them attributes. If no NAMEs are given, display the attributes and values of all variables. -n ... make NAME a reference to the variable named by its value I ask for a general explanation with an example regarding declare because I don't understand the man . I know what is a variable and expanding it but I still miss the man on declare (variable attribute?). Maybe you'd like to explain this based on the code by ilkkachu in the answer: #!/bin/bash function read_and_verify { read -p "Please enter value for '$1': " tmp1 read -p "Please repeat the value to verify: " tmp2 if [ "$tmp1" != "$tmp2" ]; then echo "Values unmatched. Please try again."; return 2 else declare -n ref="$1" ref=$tmp1 fi }
In most cases an implicit declaration is enough in bash: asdf="some text" But sometimes you want a variable's value to only be an integer (so that in case it would later change, even automatically, it could only be changed to an integer, defaulting to zero in some cases), and you can use: declare -i num or declare -i num=15 Sometimes you want arrays, and then you need declare: declare -a asdf # indexed type or declare -A asdf # associative type You can find good tutorials about arrays in bash when you browse the internet with the search string 'bash array tutorial' (without quotes), for example linuxconfig.org/how-to-use-arrays-in-bash-script I think these are the most common cases when you declare variables. Please notice also that, in a function, declare makes the variable local (to the function). Without any name, it lists all variables (in the active shell): declare Finally, you get a brief summary of the features of the shell built-in command declare in bash with the command help declare
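Since the question mentions declare -n specifically (namerefs, available in bash 4.3 and later), here is a minimal sketch of what it does:
#!/bin/bash
set_var() {
    declare -n ref="$1"   # ref now refers to the variable whose name is in $1
    ref="$2"              # assigning to ref assigns to that variable
}
set_var greeting "hello"
echo "$greeting"          # prints: hello
This is what the read_and_verify function in the question relies on: ref=$tmp1 stores the entered value in the variable whose name was passed as the first argument.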
{ "source": [ "https://unix.stackexchange.com/questions/510220", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
510,338
Windows executable files ( New or Portable executables ) can contain icons. How can I extract them, either as ICO files or separate images?
There are a number of tools you can use. icoutils , available as the eponymous package in many distributions, includes a tool capable of extracting resources from most Windows executables (16-bit NE, 32-bit PE, and 64-bit PE+), wrestool . wrestool -x --output=. -t14 /path/to/windows.exe will extract the icons present in the given Windows executable and write them to individual files, named after the executable name, with the type and icon name added. 7z can also extract all the resources in a Windows executable; 7z x /path/to/windows.exe .rsrc/ICON will extract all the icons in the given Windows executable and write them to individual files in the .rsrc/ICON directory.
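The extracted .ico files can themselves be unpacked into individual images with icotool, also part of icoutils (the output directory name is arbitrary and must exist):
icotool -x -o icons/ file.ico    # writes the images contained in the icon as PNG files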
{ "source": [ "https://unix.stackexchange.com/questions/510338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13308/" ] }
510,553
It's not completely clear to me, but what is the difference between mv and rename (from util-linux-ng 2.17.2 as /usr/bin/rename )? Are there advantages of one over the other beyond rename accepting regular expressions and mv doesn't? I believe rename can also handle multiple file renames at once, whereas mv does not do this. I couldn't find a clear indication in their man pages what else sets them apart or through some investigation on my own.
It's basically what it says on the lid, for both. mv is a standard utility to move one or more files to a given target. It can be used to rename a file, if there's only one file to move. If there are several, mv only works if the target is directory, and moves the files there. So mv foo bar will either move the file foo to the directory bar (if it exists), or rename foo to bar (if bar doesn't exist or isn't a directory). mv foo1 foo2 bar will just move both files to directory bar , or complain if bar isn't a directory. mv will call the rename() C library function to move the files, and if that doesn't work (they're being moved to another filesystem), it will copy the files and remove the originals. If all you have is mv and you want to rename multiple files, you'll have to use a shell loop. There are a number of questions on that here on the site, see e.g. this , this , and others . On the other hand, the various rename utilities rename files, individually. The rename from util-linux which you mention makes a simple string substitution, e.g. rename foo bar * would change foobar to barbar , and asdffoo to asdfbar . It does not , however, take a regular expression! The Perl rename utility ( or various instances of it ) takes a Perl expression to transform the filenames. One will most likely use an s/ pattern / replacement / command, where the pattern is a regular expression. Both the util-linux rename and the Perl rename can be used to move files to another directory at the same time, by making appropriate changes to the file name, but it's a bit awkward. Neither does more than call rename() on the files, so moving from one filesystem to another does not work. As for which rename you have, it may depend on your distribution , and/or what you have installed. Most of them support rename --version , so use that to identify which one you have.
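As a side-by-side sketch, here are the three approaches for the common task of changing a file extension:
rename .jpeg .jpg *.jpeg                               # util-linux rename: replaces the first occurrence of the string
rename 's/\.jpeg$/.jpg/' *.jpeg                        # Perl rename: regular expression substitution
for f in *.jpeg; do mv -- "$f" "${f%.jpeg}.jpg"; done  # plain mv in a shell loop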
{ "source": [ "https://unix.stackexchange.com/questions/510553", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/293632/" ] }
510,601
We have a huge text file containing millions of ordered timestamped observations and given the start point and the end point, we need a fast method to extract the observations in that period. For instance, this could be part of the file: "2018-04-05 12:53:00",28,13.6,7.961,1746,104.7878,102.2,9.78,29.1,0,2.432,76.12,955,38.25,249.9,362.4,281.1,0.04 "2018-04-05 12:54:00",29,13.59,7.915,1738,104.2898,102.2,10.01,29.53,0,1.45,200.3,952,40.63,249.3,361.4,281.1,0.043 "2018-04-05 12:55:00",30,13.59,7.907,1734,104.0326,102.2,10.33,28.79,0,2.457,164.1,948,41.39,249.8,361.3,281.1,0.044 "2018-04-05 12:56:00",31,13.59,7.937,1718,103.0523,102.2,10.72,31.42,0,1.545,8.22,941,42.06,249.4,361.1,281.1,0.045 "2018-04-05 12:57:00",32,13.59,7.975,1719,103.1556,102.2,10.68,29.26,0,2.541,0.018,940,41.95,249.1,360.1,281.1,0.045 "2018-04-05 12:58:00",33,13.59,8,1724,103.4344,102.2,10.35,29.58,0,1.908,329.8,942,42.65,249.5,361.4,281.1,0.045 "2018-04-05 12:59:00",34,13.59,8,1733,103.9831,102.2,10.23,30.17,0,2.59,333.1,948,42.21,250.2,362,281.2,0.045 "2018-04-05 13:00:00",35,13.59,7.98,1753,105.1546,102.2,10.17,29.06,0,3.306,332.4,960,42,250.4,362.7,281.1,0.044 "2018-04-05 13:01:00",36,13.59,7.964,1757,105.3951,102.2,10.24,30.75,0,2.452,0.012,962,42.03,250.4,362.4,281.1,0.044 "2018-04-05 13:02:00",37,13.59,7.953,1757,105.4047,102.2,10.31,31.66,0,3.907,2.997,961,41.1,250.6,362.4,281.1,0.043 "2018-04-05 13:03:00",38,13.59,7.923,1758,105.4588,102.2,10.28,29.64,0,4.336,50.19,962,40.85,250.3,362.6,281.1,0.042 "2018-04-05 13:04:00",39,13.59,7.893,1757,105.449,102.1,10.27,30.42,0,1.771,12.98,962,41.73,249.8,362.1,281.1,0.043 "2018-04-05 13:05:00",40,13.6,7.89,1757,105.4433,102.1,10.46,29.54,0,2.296,93.7,962,43.02,249.9,361.7,281,0.045 "2018-04-05 13:06:00",41,13.59,7.915,1756,105.3322,102.1,10.52,29.53,0,0.632,190.8,961,43.64,249.3,361.5,281,0.045 "2018-04-05 13:07:00",42,13.6,7.972,1758,105.4697,102.1,10.77,29.49,0,0.376,322.5,961,44.69,249.1,360.9,281.1,0.046 "2018-04-05 13:08:00",43,13.6,8.05,1754,105.233,102.1,11.26,28.66,0,0.493,216.8,959,44.8,248.4,360.1,281.2,0.047 If we want the datapoints between "2018-04-05 13:00:00" and "2018-04-05 13:05:00", the output should be: "2018-04-05 13:00:00",35,13.59,7.98,1753,105.1546,102.2,10.17,29.06,0,3.306,332.4,960,42,250.4,362.7,281.1,0.044 "2018-04-05 13:01:00",36,13.59,7.964,1757,105.3951,102.2,10.24,30.75,0,2.452,0.012,962,42.03,250.4,362.4,281.1,0.044 "2018-04-05 13:02:00",37,13.59,7.953,1757,105.4047,102.2,10.31,31.66,0,3.907,2.997,961,41.1,250.6,362.4,281.1,0.043 "2018-04-05 13:03:00",38,13.59,7.923,1758,105.4588,102.2,10.28,29.64,0,4.336,50.19,962,40.85,250.3,362.6,281.1,0.042 "2018-04-05 13:04:00",39,13.59,7.893,1757,105.449,102.1,10.27,30.42,0,1.771,12.98,962,41.73,249.8,362.1,281.1,0.043 "2018-04-05 13:05:00",40,13.6,7.89,1757,105.4433,102.1,10.46,29.54,0,2.296,93.7,962,43.02,249.9,361.7,281,0.045 Regular tools like grep or sed or awk are not optimized to be applied to sorted files. So they are not fast enough for. A tool which uses a binary search would be ideal for this type of problems.
{ "source": [ "https://unix.stackexchange.com/questions/510601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/286985/" ] }
511,740
Recently I started using tmux inside my terminal on my Mac. However now whenever I'm in a tmux session and I scroll up or down using my mouse, it scrolls through my command history instead of scrolling through my terminal pane. How do I disable this feature and make mouse scrolling go back to the default behavior?
Run this command: $ tput rmcup What happened most likely is that you were, either locally or remotely, running a command (like vim , or top , or many programs that use libraries similar to ncurses ) that uses the terminal's "alternate screen" mode. When this is active, many terminal programs helpfully remap the scrolling action on the mouse to arrow keys, because generally scrolling the local display is less than helpful. If this application terminated ungracefully, your terminal may still think it's in that mode. This command resets this, and should re-enable your ability to scroll. I'm guessing you're using iTerm?
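If the goal is also to have the mouse wheel scroll the tmux pane itself rather than the shell history, tmux 2.1 and later support the following in ~/.tmux.conf:
set -g mouse on
after which scrolling up enters copy mode and scrolls through the pane's history.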
{ "source": [ "https://unix.stackexchange.com/questions/511740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345760/" ] }
511,751
This cron task doesn't work: [main_usr@localhost ~]$ sudo crontab -l -u root 0 * * * * /home/main_usr/cron_test1.sh > /home/main_usr/cron_test1_out.sh.out 2>&1 [main_usr@localhost ~]$ And $ ls -al cron_test1.sh -rwxr-xr-x 1 main_usr main_usr 293 Apr 8 05:12 cron_test1.sh As you can see, there's a new line in the cron tasks. And the file exists and is executable. The task was created a day ago. It should've run once an hour. Nonetheless, 'cron_test1_out.sh.out' hasn't been created. Why?
{ "source": [ "https://unix.stackexchange.com/questions/511751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345112/" ] }
511,827
Many people use oneliners and scripts containing code along the lines cat "$MYFILE" | command1 | command2 > "$OUTPUT" The first cat is often called "useless use of cat" because technically it requires starting a new process (often /usr/bin/cat ) where this could be avoided if the command had been < "$MYFILE" command1 | command2 > "$OUTPUT" because then shell only needs to start command1 and simply point its stdin to the given file. Why doesn't the shell do this conversion automatically? I feel that the "useless use of cat" syntax is easier to read and shell should have enough information to get rid of useless cat automatically. The cat is defined in POSIX standard so shell should be allowed to implement it internally instead of using a binary in path. The shell could even contain implementation only for exactly one argument version and fallback to binary in path.
"Useless use of cat " is more about how you write your code than about what actually runs when you execute the script. It's a sort of design anti-pattern , a way of going about something that could probably be done in a more efficient manner. It's a failure in understanding of how to best combine the given tools to create a new tool. I'd argue that stringing several sed and/or awk commands together in a pipeline also sometimes could be said to be a symptom of this same anti-pattern. Fixing instances of "useless use of cat " in a script is a primarily matter of fixing the source code of the script manually. A tool such as ShellCheck can help with this by pointing out the obvious cases: $ cat script.sh #!/bin/sh cat file | cat $ shellcheck script.sh In script.sh line 2: cat file | cat ^-- SC2002: Useless cat. Consider 'cmd < file | ..' or 'cmd file | ..' instead. Getting the shell to do this automatically would be difficult due to the nature of shell scripts. The way a script executes depends on the environment inherited from its parent process, and on the specific implementation of the available external commands. The shell does not necessarily know what cat is. It could potentially be any command from anywhere in your $PATH , or a function. If it was a built-in command (which it may be in some shells), it would have the ability to reorganise the pipeline as it would know of the semantics of its built-in cat command. Before doing that, it would additionally have to make assumptions about the next command in the pipeline, after the original cat . Note that reading from standard input behaves slightly differently when it's connected to a pipe and when it's connected to a file. A pipe is not seekable, so depending on what the next command in the pipeline does, it may or may not behave differently if the pipeline was rearranged (it may detect whether the input is seekable and decide to do things differently if it is or if it isn't, in any case it would then behave differently). This question is similar (in a very general sense) to " Are there any compilers that attempt to fix syntax errors on their own? " (at the Software Engineering StackExchange site), although that question is obviously about syntax errors, not useless design patterns. The idea about automatically changing the code based on intent is largely the same though.
{ "source": [ "https://unix.stackexchange.com/questions/511827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20336/" ] }
512,362
I have a directory with over 400 GiB of data in it. I wanted to check that all the files can be read without errors, so a simple way I thought of was to tar it into /dev/null . But instead I see the following behavior: $ time tar cf /dev/null . real 0m4.387s user 0m3.462s sys 0m0.185s $ time tar cf - . > /dev/null real 0m3.130s user 0m3.091s sys 0m0.035s $ time tar cf - . | cat > /dev/null ^C real 10m32.985s user 0m1.942s sys 0m33.764s The third command above was forcibly stopped by Ctrl + C after having run for quite long already. Moreover, while the first two commands were working, activity indicator of the storage device containing . was nearly always idle. With the third command the indicator is constantly lit up, meaning extreme busyness. So it seems that, when tar is able to find out that its output file is /dev/null , i.e. when /dev/null is directly opened to have the file handle which tar writes to, file body appears skipped. (Adding v option to tar does print all the files in the directory being tar 'red.) So I wonder, why is this so? Is it some kind of optimization? If yes, then why would tar even want to do such a dubious optimization for such a special case? I'm using GNU tar 1.26 with glibc 2.27 on Linux 4.14.105 amd64.
It is a documented optimization : When the archive is being created to /dev/null , GNU tar tries to minimize input and output operations. The Amanda backup system, when used with GNU tar, has an initial sizing pass which uses this feature.
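If the point of the exercise is to force every file to actually be read (for example to check for read errors), one way is to give tar an output that is not /dev/null, such as a pipe that gets consumed:
tar cf - . | wc -c    # the archive is really produced, so every file is read
Any read errors tar encounters are still reported on stderr and reflected in its exit status.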
{ "source": [ "https://unix.stackexchange.com/questions/512362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27672/" ] }
512,702
This command, when run alone, produces the expected result (the last line of the crontab): tail -n 1 /etc/crontab However, when I run it as part of an echo command to send the result to a file, it adds a summary of all the files in the working directory, plus the expected result: sudo bash -c 'echo $(tail -n 1 /etc/crontab) > /path/to/file' Why did this command produce the extra data?
Your crontab line has one or more asterisks * in it, indicating "any time". When that line is substituted in from the command substitution, the result is something like echo * * * * * cmd > /path/to/file While most further expansions are not applied to the output of command substitution, pathname expansion is (as is field splitting) : The results of command substitution shall not be processed for further tilde expansion, parameter expansion, command substitution, or arithmetic expansion. If a command substitution occurs inside double-quotes, field splitting and pathname expansion shall not be performed on the results of the substitution. Pathname expansion is what turns *.txt into a list of matching filenames (globbing), where * matches everything. The end result is that you get every (non-hidden) filename in the working directory listed for every * in your crontab line. You could fix this by quoting the expansion, if the code you posted was a representative of a more complex command: sudo bash -c 'echo "$(tail -n 1 /etc/crontab)" > /path/to/file' but more straightforwardly just lose the echo entirely: sudo bash -c 'tail -n 1 /etc/crontab > /path/to/file' This should do what you want and it's simpler as well (the only other material difference is that this version will omit field splitting that would otherwise have occurred, so runs of spaces won't be collapsed).
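The effect is easy to reproduce in a directory containing a few files (the exact names printed depend on the directory contents):
echo $(echo '* * * * * cmd')      # each * is replaced by every filename in the directory
echo "$(echo '* * * * * cmd')"    # prints: * * * * * cmd
which is why quoting the command substitution, or dropping echo entirely, avoids the problem.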
{ "source": [ "https://unix.stackexchange.com/questions/512702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36311/" ] }
512,717
I need to build an RPM for a Java software package on MacOS. I'm using rpmbuild from homebrew, version 4.14.2.1. The toolchain is set up correctly, and I get a valid RPM at the end. There is just one snag: The RPM has a target OS string of "darwin", since it was built there, and attempting to install it on a normal Linux (think CentOS) fails with the message Transaction check error: package myrpm.noarch is intended for a different operating system and indeed, querying the RPM confirms the reason: # rpm -qp --qf '%{os}\n' myrpm.noarch.rpm darwin In order not to change my source tree, I'd like to put the necessary properties in a local .rpmrc file. How would I need to set it up so that I get a truly platform-independent RPM in the end? To clarify this: The rpm contains software and paths that work on any system with a Java 8 JRE and a POSIX-like file system. It should at least be installable on darwin/MacOS and the Redhat / CentOS / SuSE universe.
{ "source": [ "https://unix.stackexchange.com/questions/512717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7932/" ] }
512,759
First some specs: my computer is an HP EliteBook 8460p. It comes with an integrated Chicony HP HD webcam. My issue is that a lot of applications (well, at least Skype and guvcview) are displaying multiple lines for the same webcam; indeed, if I do ls -l /dev | grep video , I get the following: crw-rw---- 1 root video 29, 0 Apr 16 08:13 fb0 crw-rw---- 1 root video 243, 0 Apr 16 08:13 media0 crw-rw----+ 1 root video 81, 0 Apr 16 08:13 video0 crw-rw----+ 1 root video 81, 1 Apr 16 08:13 video1 I have 2 /dev/video[n] with only one (integrated) webcam; Skype will work properly with /dev/video0 , but not with /dev/video1 . Same for guvcview. If I plug another USB webcam, for example a logitech one, I get the following with dmesg : [21222.638802] usb 2-2: new high-speed USB device number 20 using xhci_hcd [21222.970684] usb 2-2: New USB device found, idVendor=046d, idProduct=08c2, bcdDevice= 0.05 [21222.970755] usb 2-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [21222.972518] uvcvideo: Found UVC 1.00 device <unnamed> (046d:08c2) [21226.044535] uvcvideo 2-2:1.0: Entity type for entity Extension 4 was not initialized! [21226.044538] uvcvideo 2-2:1.0: Entity type for entity Extension 8 was not initialized! [21226.044540] uvcvideo 2-2:1.0: Entity type for entity Extension 10 was not initialized! [21226.044541] uvcvideo 2-2:1.0: Entity type for entity Extension 9 was not initialized! [21226.044543] uvcvideo 2-2:1.0: Entity type for entity Extension 3 was not initialized! [21226.044545] uvcvideo 2-2:1.0: Entity type for entity Processing 2 was not initialized! [21226.044547] uvcvideo 2-2:1.0: Entity type for entity Camera 1 was not initialized! [21226.044746] input: UVC Camera (046d:08c2) as /devices/pci0000:00/0000:00:1c.7/0000:25:00.0/usb2/2-2/2-2:1.0/input/input35 [21226.137559] usb 2-2: Warning! Unlikely big volume range (=3072), cval->res is probably wrong. [21226.137569] usb 2-2: [5] FU [Mic Capture Volume] ch = 1, val = 4608/7680/1 And the following with ls -l /dev/ | grep video : crw-rw---- 1 root video 29, 0 Apr 16 08:13 fb0 crw-rw---- 1 root video 243, 0 Apr 16 08:13 media0 crw-rw---- 1 root video 243, 1 Apr 16 14:06 media1 crw-rw----+ 1 root video 81, 0 Apr 16 08:13 video0 crw-rw----+ 1 root video 81, 1 Apr 16 08:13 video1 crw-rw----+ 1 root video 81, 2 Apr 16 14:06 video2 crw-rw----+ 1 root video 81, 3 Apr 16 14:06 video3 3 new entries: /dev/media1 , /dev/video2 and /dev/video3 . I even found a Sony webcam (CEVCECM) that adds up to 4 new devices. The dmesg logs: [21927.665747] usb 2-2: new high-speed USB device number 23 using xhci_hcd [21927.817330] usb 2-2: New USB device found, idVendor=05e3, idProduct=0608, bcdDevice= 9.01 [21927.817339] usb 2-2: New USB device strings: Mfr=0, Product=1, SerialNumber=0 [21927.817343] usb 2-2: Product: USB2.0 Hub [21927.824119] hub 2-2:1.0: USB hub found [21927.824814] hub 2-2:1.0: 4 ports detected [21928.113733] usb 2-2.4: new high-speed USB device number 24 using xhci_hcd [21928.223184] usb 2-2.4: New USB device found, idVendor=054c, idProduct=097b, bcdDevice=21.12 [21928.223192] usb 2-2.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [21928.223197] usb 2-2.4: Product: CEVCECM [21928.223201] usb 2-2.4: Manufacturer: Sony [21928.223206] usb 2-2.4: SerialNumber: DHZD10412EUHK1 [21928.227506] uvcvideo: Found UVC 1.00 device CEVCECM (054c:097b) [21928.242592] uvcvideo: Unable to create debugfs 2-24 directory. [21928.242780] uvcvideo 2-2.4:1.0: Entity type for entity Extension 7 was not initialized! 
[21928.242783] uvcvideo 2-2.4:1.0: Entity type for entity Extension 3 was not initialized! [21928.242785] uvcvideo 2-2.4:1.0: Entity type for entity Processing 2 was not initialized! [21928.242787] uvcvideo 2-2.4:1.0: Entity type for entity Camera 1 was not initialized! [21928.242877] input: CEVCECM: CEVCECM as /devices/pci0000:00/0000:00:1c.7/0000:25:00.0/usb2/2-2/2-2.4/2-2.4:1.0/input/input38 And the resulting device files with ls -l /dev | grep video : crw-rw---- 1 root video 29, 0 Apr 16 08:13 fb0 crw-rw---- 1 root video 243, 0 Apr 16 08:13 media0 crw-rw---- 1 root video 243, 1 Apr 16 14:18 media1 crw-rw----+ 1 root video 81, 0 Apr 16 08:13 video0 crw-rw----+ 1 root video 81, 1 Apr 16 08:13 video1 crw-rw----+ 1 root video 81, 2 Apr 16 14:18 video2 crw-rw----+ 1 root video 81, 3 Apr 16 14:18 video3 crw-rw----+ 1 root video 81, 4 Apr 16 14:18 video4 crw-rw----+ 1 root video 81, 5 Apr 16 14:18 video5 5 new entries: /dev/media1 and /dev/video2 to /dev/video5 . I feel like the correct files to use are the /dev/media[n] ones, but Skype and guvcview somehow fail to do so and fallback to the /dev/video[n] . I don't have this issue with Webcamoid for example. If anyone has an idea, I take it. In the meantime I will continue the investigation... --- Edited the 2019-05-14 --- Got some interesting information using v4l2-ctl --device=/dev/video* --all . For the Chicony HP HD webcam, its 2 device files have different device capabilities: # Devices capabilities for /dev/video0 Video Capture Streaming Extended Pix Format # Devices capabilities for /dev/video1 Metadata Capture Streaming Extended Pix Format I get similar results for the USB webcams. So after all, maybe what Skype and guvcview fail to do is to only list video devices that support the Video Capture device capability.
The second device provides metadata about the video data from the first device. The new devices were introduced by this patch: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=088ead25524583e2200aa99111bea2f66a86545a More information on the V4L metadata interface can be found here: https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/dev-meta.html For run of the mill USB Video Class devices, this mostly just provides more accurate timestamp information . For cameras like Intel's RealSense line, provide a wider range of data about how the image was captured . Presumably this data was split out into a separate device node because it couldn't easily be delivered on the primary device node in a compatible way. It's a bit of a pain though, since (a) applications that don't care about this metadata now need to filter out the extra devices, and (b) applications that do care about the metadata need a way to tie the two devices together.
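To see which node is which, v4l2-ctl from the v4l-utils package can list the devices and their capabilities, roughly like this:
v4l2-ctl --list-devices
for d in /dev/video*; do echo "== $d"; v4l2-ctl -d "$d" --info; done
The node whose capabilities include Video Capture is the one to hand to applications; the one reporting Metadata Capture is the companion metadata device.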
{ "source": [ "https://unix.stackexchange.com/questions/512759", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247149/" ] }
512,953
File1: 123 234 345 456 File2: 123 234 343 758 Expected output: File3: TRUE TRUE FALSE FALSE The code should compare the two files line by line and print 'TRUE' if the lines match, otherwise print 'FALSE', in the new file. Could anyone please provide a solution for this?
Use diff command as following, in bash or any other shell that supports <(...) process substitutions or you can emulate it as shown here : diff --new-line-format='FALSE'$'\n' \ --old-line-format='' \ --unchanged-line-format='TRUE'$'\n' \ <(nl file1) <(nl file2) Output would be: TRUE TRUE FALSE FALSE --new-line-format='FALSE'$'\n' , print FALSE if lines were differ and with --old-line-format='' we disable output if line was differ for file1 which is known as old file to diff command (We could swap these as well, meaning that one of them should print FALSE another should be disabled.) --unchanged-line-format='TRUE'$'\n' , print TRUE if lines were same. the $'\n' C-style escaping syntax is used to printing a new line after each line output.
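An alternative sketch using paste and awk, which may be easier to read for a simple line-by-line comparison (this assumes both files have one value per line and the same number of lines):
paste file1 file2 | awk '{ print (($1 == $2) ? "TRUE" : "FALSE") }' > file3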
{ "source": [ "https://unix.stackexchange.com/questions/512953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347601/" ] }
512,957
My work has set up an Ubuntu server for my team to host a project on. They set up a sudo user for me with my name as the username, and a default password. I was able to SSH into the server using these original username and password. Since my whole team would be accessing the server, I wanted to change the username and password so that it wasn't my name, but instead the project name. After doing so, I think I was still able to SSH in using the new username and password. However, now when I try to SSH in I get the error ssh: connect to host xxx.xx.xx.xx port 22: Connection refused . The server was set up so that it can only be accessed from any of our office's networks. Is this sudden refused connection due to me changing the username and password (maybe something to do with the RSA keys, I don't know), or else is it more likely to be an issue with firewalls or my office's network? Edit: Here is a detailed description of the process I took to change the username from 'abc' to 'xyz': While logged in as user 'abc', I tried to run sudo usermod -l xyz abc , but I couldn't as it said process abc is already running . I then created a new user named 'temp' with sudo access. I SSHd in as 'temp', ran sudo usermod -l xyz abc without any errors. I then SSHd in with 'xyz' successfully, deleted user 'temp' and ran passwd to change xyz's password. I'm pretty sure I exited from SSH, then successfully SSHd in again with the new username and password, but I may be wrong here - I can't remember.
{ "source": [ "https://unix.stackexchange.com/questions/512957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171446/" ] }
513,657
I have two simple programs: A and B . A would run first, then B gets the “stdout” of A and uses it as its “stdin”. Assume I am using a GNU/Linux operating system and the simplest possible way to do this would be: ./A | ./B If I had to describe this command, I would say that it is a command that takes input (i.e., reads) from a producer ( A ) and writes to a consumer ( B ). Is that a correct description? Am I missing anything?
The only thing about your question that stands out as wrong is that you say A would run first, then B gets the stdout of A In fact, both programs would be started at pretty much the same time. If there's no input for B when it tries to read, it will block until there is input to read. Likewise, if there's nobody reading the output from A , its writes will block until its output is read (some of it will be buffered by the pipe). The only thing synchronising the processes that take part in a pipeline is the I/O, i.e. the reading and writing across the pipe. If no writing or reading happens, then the two processes will run totally independent of each other. If one ignores the reading or writing of the other, the ignored process will block and eventually be killed by a SIGPIPE signal (if writing) or get an end-of-file condition on its standard input stream (if reading) when the other process terminates. The idiomatic way to describe A | B is that it's a pipeline containing two programs. The output produced on standard output from the first program is available to be read on the standard input by the second ("[the output of] A is piped into [the input of] B "). The shell does the required plumbing to allow this to happen. If you want to use the words "consumer" and "producer", I suppose that's ok too. The fact that these are programs written in C is not relevant. The fact that this is Linux, macOS, OpenBSD or AIX is not relevant.
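A small demonstration that both sides of the pipe start together (the messages go to stderr, so they are not part of the pipe):
{ echo 'A started' >&2; sleep 2; echo data; } | { echo 'B started' >&2; cat; }
Both "started" lines appear immediately; the data only comes through two seconds later.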
{ "source": [ "https://unix.stackexchange.com/questions/513657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/348269/" ] }
513,693
Over the past couple of days I've been trying to dig through directories of files to report on files containing key words. Through the help of other users showing me the correct syntax for grep and sed , I’ve come up with the following solution which has worked well. Unfortunately, it’s a bit repetitive and I'd like to refactor it. echo "<HR><BR><B><h2>Search for Keyword: KeyWord1<BR></B></h2><ol>" >> temp.txt grep -lr -Fiw 'KeyWord1' * | sed -e 's|.*|<li><a href="http://&">&</a></li>|' >> temp.txt echo "</ol>" >> temp.txt echo "<HR><BR><B><h2>Search for Keyword: KeyWord2<BR></B></h2><ol>" >> temp.txt grep -lr -Fiw 'KeyWord2' * | sed -e 's|.*|<li><a href="http://&">&</a></li>|' >> temp.txt echo "</ol>" >> temp.txt echo "<HR><BR><B><h2>Search for Keyword: KeyWord3<BR></B></h2><ol>" >> temp.txt grep -lr -Fiw 'KeyWord3' * | sed -e 's|.*|<li><a href="http://&">&</a></li>|' >> temp.txt echo "</ol>" >> temp.txt I would like to produce the same results as the output generated by the above, but eliminate the clutter. Please advise on how to properly achieve something like the following: Var myList = "KeyWord1, KeyWord2, KeyWord3" while myList; Do echo "<HR><BR><B><h2>Search for Keyword: $myList<BR></B></h2><ol>" >> temp.txt grep -lr -Fiw '$myList' * | sed -e 's|.*|<li><a href="http://&">&</a></li>|' >> temp.txt echo "</ol>" >> temp.txt ; done
{ "source": [ "https://unix.stackexchange.com/questions/513693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271874/" ] }
514,078
What is /etc/mtab in Linux? Why is it needed, and what are the advantages of having it?
% file /etc/mtab /etc/mtab: symbolic link to ../proc/self/mounts % file /proc/mounts /proc/mounts: symbolic link to self/mounts % /etc/mtab is a compatibility mechanism. Decades ago, Unix did not have a system call for reading the existing mount information. Instead, programs that mounted filesystems were expected to coöperatively and voluntarily maintain a table in /etc/mtab of what was mounted where. For obvious reasons, this was not an ideal mechanism. Linux gained the notion of a "procfs", and one of the things that it gained was a kernel-maintained version of this table, in the form of a mounts pseudo-regular file. The "system call" to read the mount information out of the kernel became an open-read-close sequence against that file, followed by parsing the result from human-readable to machine-readable form (something that has some subtle catches, as you can see from the bug reports from just over a fortnight ago). /etc/mtab thus has popularly become a symbolic link to /proc/mounts , allowing programs that had hardwired that name to keep reading a mount table from that file, which the programs that mounted and unmounted filesystems no longer have to explicitly do anything themselves to keep up to date. (Some of them still will, though, if /etc/mtab turns out to be a writable regular file. And there are a few corner cases where the normalized information in mounts that lacks all non-kernel stuff is not quite what is needed; although they do not outweigh the general problems with /etc/mtab .) Each process can nowadays have its own individual view of what is mounted, and there are as a consequence now individual mounts files for each process in the procfs, each process's own table being accessible to it via the self symbolic link as self/mounts , and /proc/mounts is also now a compatibility mechanism. (Interestingly, neither per-process mounts nor the format of mounts are documented in the current Linux doco, although the similar mountinfo pseudo-regular file is.) SunOS/Solaris has a similar mechanism. The /etc/mnttab file is actually a single-file filesystem, and in addition to reading the table, via an open file descriptor to that file, with the read() system call, one can watch for mount point changes with poll() and obtain various further pieces of information with ioctl() . In HP-UX, /etc/mnttab is likewise the name of the file, but as of version 11 it was still a regular file whose contents were coöperatively maintained by the system utility programs. AIX does not export a human-readable text table that programs have to parse, and there is no equivalent file. The BSDs, similarly, have fully-fledged system calls, getfsstat() on FreeBSD and OpenBSD, for programs to obtain the mount table from the kernel in machine-readable form without marshalling it through a human-readable intermediate form. Further reading Zygmunt Krynicki (2019-03-16). \r in path confuses mount units . #12018. systemd issues. Zbigniew Jędrzejewski-Szmek (2019-04-04). [df] incorrect parsing of /proc/self/mountinfo with \r in mount path . #35137. GNU coreutils bugs. /proc/mounts . Documentation/filesystems/proc.txt . Linux 5.1. Jonathan de Boyne Pollard (2019-02-28). Re: what is the purpose of fstab-decode . Bug #567071. Debian bugs. getfsstat() . FreeBSD System Calls Manual . 2016-12-27.
{ "source": [ "https://unix.stackexchange.com/questions/514078", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61379/" ] }
515,895
What is the difference between a[bc]d and a{b,c}d ? Why do people use a{b,c}d when there is already a[bc]d ?
The two are quite different. a[bc]d is a filename pattern (in shells other than fish ). It will expand to the two filenames abd and acd if those are names of existing files in the current directory. The [...] part is a bracketed expression that matches a single character out of the ones listed (or collating elements when ranges are included). To match the pattern a[bc]d , the character between the strings a and d in a filename must be either a b or a c . If abd exists, but acd does not, then it would only expand to abd , and vice versa. If neither abd , nor acd exist, depending on the shell and the options, it would trigger an error (original Unix sh , (t)csh , zsh , fish , bash -O failglob ) and possibly exit the shell, or leave the pattern unexpanded¹ (Bourne-like and rc -like shells) or expand to nothing ( bash/zsh/yash -o nullglob , some older versions of fish , original Unix sh and (t)csh if there are other matching globs in the same command). a{b,c}d is a brace expansion (in shells that support these). It will expand to the two strings abd and acd . The {...} part is a comma-delimited set of strings (in this example; in some shell, it may also be a range such as a..k or 20..25 or more advanced ones like 00..20..2 or 0..20..2%02d ), and the expansion is computed by combining each of these strings with the flanking strings a and d . These strings could be longer than a single character and could also be brace expansions themselves. The expansion happens regardless of whether these strings corresponds to existing filenames or not. If you are constructing strings, use a brace expansion. If you are matching filenames, use a filename pattern. ¹ In this particular case, a[bc]d could happen to be the name of an existing file which is why it's potentially dangerous to use things like rm -f ./*.[ch] in those shells and rm -f ./*.{c,h} is less of a problem.
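A quick illustration in an empty directory (bash with default options):
$ touch abd
$ echo a{b,c}d
abd acd
$ echo a[bc]d
abd
The brace expansion produces both strings whether or not the files exist; the glob only expands to the names of files that are actually there.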
{ "source": [ "https://unix.stackexchange.com/questions/515895", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259023/" ] }
516,870
Is there any way to add custom commands to /bin ? For example, I use docker container ls a lot, and would like to turn this into a shortcut command, like dcls . If I add a file named dcls to /bin and inside the file, specify the exact command docker container ls , I assume this wouldn't work. What is the right way, if there is one, to do something like this?
An easy way for a shortcut is to define an alias alias dcls='docker container ls' This will execute docker container ls when you enter dcls and the command alias lists your defined aliases. To remove this alias use unalias dcls . If you use bash, you can save the alias in your ~/.bashrc or ~/.bash_aliases . If your ~/.bash_aliases is not read on startup, you can add this line to your ~/.bashrc : [ -f ~/.bash_aliases ] && . ~/.bash_aliases
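If you later want the shortcut to handle arguments in arbitrary positions, a shell function is a common alternative to an alias (a sketch; dcls is just the name chosen in the question):
dcls() {
    docker container ls "$@"
}
With an alias, extra arguments can only be appended to the end of the expanded text; a function can place "$@" wherever it is needed.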
{ "source": [ "https://unix.stackexchange.com/questions/516870", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/351127/" ] }
516,925
I don't want to remove the service, I just want to avoid its start on boot. I still need the option to start it manually later (with the systemctl start <service> command). I tried to use systemctl disable <service> . It doesn't work, because it removes the service. There is another possibility. In its service file, [Install] #WantedBy=multi-user.target could be commented out (and then, systemctl daemon-reload ). It works in the case of my own services, because their service files was written by me. However, the service files belonging to distribution, are in /lib/systemd/system . Files in this directory are managed by the OS, i.e. they would be overwritten by updates, other parts of the system might assume that these are unmodified, and so on. Simply editing system files out of the /etc is a bad practice, and I don't want to do that. I don't want to edit configuration files in my /lib . What to do?
systemctl disable is the correct way to do this; it still allows starting a unit manually, even if it doesn’t appear in systemctl --all ’s output — to list all startable units, you should run systemctl list-unit-files instead. To render a unit un-startable, you need to mask it. $ sudo systemctl stop unbound $ sudo systemctl status unbound ● unbound.service - Unbound DNS server Loaded: loaded (/lib/systemd/system/unbound.service; enabled; vendor preset: enabled) Active: inactive (dead) since Fri 2019-05-03 13:12:41 CEST; 5s ago Docs: man:unbound(8) Main PID: 5320 (code=exited, status=0/SUCCESS) $ sudo systemctl disable unbound $ sudo systemctl status unbound ● unbound.service - Unbound DNS server Loaded: loaded (/lib/systemd/system/unbound.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:unbound(8) $ sudo systemctl start unbound $ sudo systemctl status unbound ● unbound.service - Unbound DNS server Loaded: loaded (/lib/systemd/system/unbound.service; disabled; vendor preset: enabled) Active: active (running) since Fri 2019-05-03 13:13:14 CEST; 1s ago Docs: man:unbound(8) Process: 30513 ExecStartPre=/usr/lib/unbound/package-helper chroot_setup (code=exited, status=0/SUCCESS) Process: 30518 ExecStartPre=/usr/lib/unbound/package-helper root_trust_anchor_update (code=exited, status=0/SUCCESS) Main PID: 30525 (unbound) Tasks: 1 (limit: 4915) CGroup: /system.slice/unbound.service └─30525 /usr/sbin/unbound -d If you really want to, you can override system-provided services defined in /lib by adding files in /etc , and change their desired target; systemctl edit yourunit will do the right thing: it opens an editor, allowing you to override only the settings you care about, and it will store the result in the right place, as an override “snippet”. Updates made to non-overridden settings in the system-provided services ( e.g. by package upgrades) will be taken into account transparently.
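For completeness, masking looks like this (unit name reused from the example above; the exact messages may differ between systemd versions):
$ sudo systemctl mask unbound
Created symlink /etc/systemd/system/unbound.service → /dev/null.
$ sudo systemctl start unbound
Failed to start unbound.service: Unit unbound.service is masked.
$ sudo systemctl unmask unbound
The mask is just a symlink to /dev/null under /etc, so it survives package upgrades and is easy to undo.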
{ "source": [ "https://unix.stackexchange.com/questions/516925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52236/" ] }
516,931
How do I install Wine on Fedora 20? Other versions of Fedora or other Linuxes are not an option. I would prefer to use a package manager, but can also build from source.
{ "source": [ "https://unix.stackexchange.com/questions/516931", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9320/" ] }
516,949
Problem I have a script to be run at 1:32 am, so I set a cronjob by $ crontab -e And in the editing file, I have 32 1 * * * /home/user/.scripts/midnightjobs where "user" is my user name. However, it did not work. Attempts I made I tried adding a logging function in my script, and hoped to see what was wrong. It seems the script never ran. I also tried adding another cronjob at 7:59am: 0 8 * * * /home/user/.scripts/midnightjobs And it works! The script ran, and did output a log file at 8 am. My guess I believe I have been very careful, and based on my second attempt, my best guess is that my laptop (running Arch Linux) secretly falls asleep at night, failing to run the cronjob.
{ "source": [ "https://unix.stackexchange.com/questions/516949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291142/" ] }
518,266
I am running Arch Linux on a Raspberry Pi. Suddenly: I am unable to ping to a website. I am unable to access a website from the browser. I have two more computers (all running Arch Linux) connected to the Internet, where I can ping and use the Internet. Also, /etc/resolv.conf is identical on the other computers: nameserver 10.230.252.252 nameserver 203.147.88.2 nameserver 8.8.8.8 search domain.name I can use VNC. I can also ping to 8.8.8.8. When trying to access DuckDuckGo on Chromium I get: This site can’t be reached duckduckgo.com’s server IP address could not be found. DNS_PROBE_FINISHED_NXDOMAIN I have an active Internet connection. What's wrong?
Although I've never had this problem on my x86_64 PCs, all running Arch Linux, it still happens frequently with Arch Linux ARM when running NetworkManager. The symptom is that you are connected to wifi but cannot ping or use the internet, yet you can reach all the computers on the local network and even use remote desktop sharing software. Most likely something goes wrong when ping or your browser tries to resolve the host name. I can think of 3 solutions: Solution 1 I believe this is a problem on thousands of Raspberry Pi systems running Arch Linux ARM and using NetworkManager. In my case /etc/resolv.conf was a broken symlink to ../run/systemd/resolve/stub-resolv.conf . NetworkManager can't populate the symlink, and /etc/resolv.conf stays empty. We have to: Remove the broken symlink: # rm /etc/resolv.conf Create an /etc/NetworkManager/conf.d/dns.conf file with the contents: [main] dns=none systemd-resolved=false Restart NetworkManager: sudo systemctl restart NetworkManager This should fix the issue; if not, follow Solution 2. Solution 2 In case the above didn't fix the issue for you, you can temporarily populate /etc/resolv.conf by running: sudo systemctl restart systemd-resolved && sudo systemctl stop systemd-resolved This works because something is probably clobbering the /etc/resolv.conf file; the command above rewrites its contents, but you should still look into what is causing the issue. Solution 3 If you can't get your /etc/resolv.conf back, just create a new /etc/resolv.conf (deleting the empty old file or symbolic link if one exists) and paste in: search domain.name nameserver 8.8.8.8 nameserver 1.1.1.1 nameserver 1.0.0.1 Note that for the first nameserver you can also use your router's IP address (for example nameserver 192.168.43.1 in my case), which also lets you reach other systems on the local network by name. Generating resolv.conf by hand like this is not ideal, but I had a bad time with NetworkManager's auto-generated version, and systemd-resolved also generates wrong ones, even on my PC. It may look odd that I am mixing Google's primary DNS with Cloudflare's here; you can just as well use 8.8.8.8 with 8.8.4.4, or 1.1.1.1 with 1.0.0.1. Although that step works, you may want to stop NetworkManager from overwriting the file whenever it restarts: add this entry to /etc/NetworkManager/NetworkManager.conf [main] dns=none systemd-resolved=false These steps worked for my installations on the Raspberry Pi 3 Model B. I hope they work for you, too.
{ "source": [ "https://unix.stackexchange.com/questions/518266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274717/" ] }
518,655
How can I achieve cmd >> file1 2>&1 1>>file2 That is, the stdout and stderr should redirect to one file (file1) and only stdout (file2) should redirect to another (both in append mode)?
Problem is that when you redirect your output, it's not available anymore for the next redirect. You can pipe to tee in a subshell to keep the output for the second redirection: ( cmd | tee -a file2 ) >> file1 2>&1 or if you like to see the output in terminal: ( cmd | tee -a file2 ) 2>&1 | tee -a file1 To avoid adding the stderr of the first tee to file1 , you should redirect the stderr of your command to some file descriptor (e.g. 3), and later add this to stdout again: ( 2>&3 cmd | tee -a file2 ) >> file1 3>&1 # or ( 2>&3 cmd | tee -a file2 ) 3>&1 | tee -a file1 (thanks @fra-san)
{ "source": [ "https://unix.stackexchange.com/questions/518655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/352141/" ] }
519,315
I want to use printf to print a variable. It might be possible that this variable contains a % percent sign. Minimal example: $ TEST="contains % percent" $ echo "${TEST}" contains % percent $ printf "${TEST}\n" bash: printf: `p': invalid format character contains $ ( echo provides the desired output.)
Use printf in its normal form: printf '%s\n' "${TEST}" From man printf : SYNOPSIS printf FORMAT [ARGUMENT]... You should never pass a variable to the FORMAT string as it may lead to errors and security vulnerabilities. Btw: if you want to have % sign as part of the FORMAT, you need to enter %% , e.g.: $ printf '%d%%\n' 100 100%
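To see the difference concretely (same variable as in the question):
$ TEST="contains % percent"
$ printf "$TEST\n"        # $TEST is used as the FORMAT string, so '%' starts a directive
bash: printf: `p': invalid format character
contains 
$ printf '%s\n' "$TEST"   # $TEST is only an argument, printed verbatim
contains % percent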
{ "source": [ "https://unix.stackexchange.com/questions/519315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/296048/" ] }
519,469
I'm trying to recursively search a string with grep but I get this: $ grep -r "stuff" * grep: unrecognized option '---corporate-discount.csv' Usage: grep [OPTION]... PATTERN [FILE]... Try 'grep --help' for more information. How can I prevent Bash from passing files starting with - as argument?
First, note that the interpretation of arguments starting with dashes is up to the program being started, grep or other. The shell has no direct way to control it. Assuming you want to process such files (and not ignore them completely), grep , along with most programs, recognizes -- as indicating the end of options, so grep -r -e "stuff" -- * will do what you want. The -e is there in case stuff starts with a - as well. Alternatively, you can also use: grep -r -e "stuff" ./* That latter one would also avoid the problem if there was a file called - in the current directory. Even after the -- separator, grep interprets - as meaning stdin, while ./- is the file called - in the current directory.
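A quick way to reproduce and fix the problem (hypothetical file names):
$ touch ./--corporate-discount.csv aFile
$ grep -r "stuff" *
grep: unrecognized option '--corporate-discount.csv'
$ grep -r -e "stuff" -- *
$ grep -r -e "stuff" ./*
After -- , or with the ./ prefix, the awkward name is passed through as an ordinary operand rather than being parsed as an option.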
{ "source": [ "https://unix.stackexchange.com/questions/519469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353460/" ] }
519,782
I start off in empty directory. $ touch aFile $ ls aFile Then I ls two arguments, one of which isn't in this directory. I redirect both output streams to a file named output . I use >> in order to avoid writing simultaneously. $ ls aFile not_exist >>output 2>>output $ cat output ls: cannot access 'not_exist': No such file or directory aFile Which seems to work. Are there any dangers to this approach?
No, it's not just as safe as the standard >>bar 2>&1 . When you're writing foo >>bar 2>>bar you're opening the bar file twice with O_APPEND , creating two completely independent file objects[1], each with its own state (pointer, open modes, etc). This is very much unlike 2>&1 which is just calling the dup(2) system call, and makes the stderr and stdout interchangeable aliases for the same file object. Now, there's a problem with that: O_APPEND may lead to corrupted files on NFS filesystems if more than one process appends data to a file at once. This is because NFS does not support appending to a file, so the client kernel has to simulate it, which can't be done without a race condition. You usually can count on the probability of the file like bar in foo >>bar 2>&1 being written to at the same time from two separate places being quite low. But by your >>bar 2>>bar you just increased it by a dozen orders of magnitude, without any reason. [1] "Open File Descriptions" in POSIX lingo.
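On Linux you can actually see the two independent open file descriptions, and their separate offsets, through /proc. A small bash illustration (the 1234 in the output is just a placeholder for this shell's PID):
$ : > bar                  # start with an empty file
$ exec 3>>bar 4>>bar       # same file opened twice, like  >>bar 2>>bar
$ echo hello >&3
$ grep ^pos /proc/$$/fdinfo/3 /proc/$$/fdinfo/4
/proc/1234/fdinfo/3:pos:    6
/proc/1234/fdinfo/4:pos:    0
With the dup(2) form ( 4>&3 , like 2>&1 ) both descriptors would share a single open file description and therefore a single offset.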
{ "source": [ "https://unix.stackexchange.com/questions/519782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353638/" ] }
519,914
VAR=a,b,c,d # VAR=$(echo $VAR|tr -d '\n') echo "[$VAR]" readarray -td, ARR<<< "$VAR" declare -p ARR Result: [a,b,c,d] declare -a ARR=([0]="a" [1]="b" [2]="c" [3]=$'d\n') How can I tell readarray not to add the final newline \n ? What is the meaning of the latest $ symbol?
The implicit trailing new-line character is not added by the readarray builtin, but by the here-string ( <<< ) of bash , see Why does a bash here-string add a trailing newline char? . You can get rid of that by printing the string without the new-line using printf and read it over a process-substitution technique < <() readarray -td, ARR < <(printf '%s' "$VAR") declare -p ARR would properly generate now declare -a ARR=([0]="a" [1]="b" [2]="c" [3]="d")
{ "source": [ "https://unix.stackexchange.com/questions/519914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199331/" ] }
519,920
How do I get the list of days on which no files were received? I have used the below command to get the file count along with the date: find . -maxdepth 1 -type f -printf '%TY-%Tm-%Td\n' | awk '{array[$0]+=1}END{ for(val in array) print val" "array[val] }'|sort Output: 2019-05-09 1 2019-05-10 3 2019-05-13 2 2019-05-14 5 2019-05-15 1 2019-05-16 2 2019-05-17 1 2019-05-20 2 I would need the missing days to show a count of 0 as well, e.g.: 2019-05-12 0
{ "source": [ "https://unix.stackexchange.com/questions/519920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353883/" ] }
519,921
Using the --parents option works, but it copies the complete directory structure, whereas I need to preserve only the last directory: find /tmp/data/ -type f -name "*.txt" -exec cp --parents {} /u01/ABC/ \; Output that I get: /u01/ABC/tmp/data/a/1.txt /u01/ABC/tmp/data/b/1.txt /u01/ABC/tmp/data/c/1.txt Output that I need: /u01/ABC/a/1.txt /u01/ABC/b/1.txt /u01/ABC/c/1.txt
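One possible approach, as a sketch (it assumes the immediate parent directory of each file is the only level you want to keep):
find /tmp/data/ -type f -name '*.txt' -exec sh -c '
    for f in "$@"; do
        d=/u01/ABC/$(basename "$(dirname "$f")")
        mkdir -p "$d" && cp "$f" "$d/"
    done' sh {} +
For every matching file this takes only its immediate parent directory (a, b, c in the example) and recreates just that one level under /u01/ABC.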
{ "source": [ "https://unix.stackexchange.com/questions/519921", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353852/" ] }
520,231
I've recently begun supporting Linux installed on devices with built-in nvme ssds. I noticed the device files had an extra number, beyond a number identifying the drive number and the partition number. IDE/SATA/SCSI drives normally only have a drive letter and partition number. For example: /dev/nvme0n1p2 I got to wondering what the n1 part was, and after a bit of searching, it looks like that identifies an nvme 'namespace'. The definitions for it were kind of vague: "An NVMe namespace is a quantity of non-volatile memory (NVM) that can be formatted into logical blocks." So, does this act like a partition that is defined at the hardware controller level, and not in an MBR or GPT partition table? Can a namespace span multiple physical nvme ssd's? E.g. can you create a namespace that pools together storage from multiple ssd's into a single logical namespace, similar to RAID 0? What would you do with an NVME namespace that you can't already achieve using partition tables or LVM or a filesystem that can manage multiple volumes (like ZFS, Btrfs, etc)? Also, why does it seem like the namespace numbering starts at 1 instead of 0? Is that just something to do with how NVME tracks the namespace numbers at a low level (e.g. partitions also start at 1, not 0, because that is how the standard for partition numbers was set, so the Linux kernel just uses whatever the partition number that is stored on disk is - I guess nvme works the same way?)
In NVM Express and related standards, controllers give access to storage divided into one or more namespaces. Namespaces can be created and deleted via the controller, as long as there is room for them (or the underlying storage supports thin provisioning), and multiple controllers can provide access to a shared namespace. How the underlying storage is organised isn’t specified by the standard, as far as I can tell. However typical NVMe SSDs can’t be combined, since they each provide their own storage and controller attached to a PCI Express port, and the access point is the controller, above namespaces — thus a namespace can’t group multiple controllers (multiple controllers can provide access to a shared namespace). It’s better to think of namespaces as something akin to SCSI LUNs as used in enterprise storage (SANs etc.). Namespace numbering starts at 1 because that’s how per-controller namespace identifiers work. Namespaces also have longer, globally-unique identifiers. Namespaces can be manipulated using the nvme command, which provides support for low-level NVMe features including: formatting, which performs a low-level format and allows various features to be used (secure erase, LBA format selection...); attaching and detaching, which allows controllers to be attached to or detached from a namespace (if they support it and the namespace allows it). Attaching and detaching isn’t something you’ll come across in laptop or desktop NVMe drives. You’d use it with NVMe storage bays such as those sold by Dell EMC, which replace the iSCSI SANs of the past. See the NVM Express standards for details (they’re relatively easy to read), and this NVM Express tutorial presentation for a good introduction.
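For orientation, these are the kinds of nvme-cli invocations involved (a sketch: the device names are placeholders, the sizes are arbitrary examples, and namespace management is typically only available on enterprise-class devices):
nvme list                          # controllers/namespaces the kernel can see
nvme id-ctrl /dev/nvme0            # controller data; the 'nn' field is the max number of namespaces
nvme id-ns /dev/nvme0n1            # details of namespace 1 (LBA formats, sizes, identifiers)
nvme create-ns /dev/nvme0 --nsze=0x100000 --ncap=0x100000 --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=0
On a typical consumer SSD only the first three will do anything useful.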
{ "source": [ "https://unix.stackexchange.com/questions/520231", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58182/" ] }
523,079
I just installed build-essential , which installed gcc-8 but the man pages seem to be unavailable. $ man gcc No manual entry for gcc See 'man 7 undocumented' for help when manual pages are not available. Moreover, I can see that the man pages aren't provided by gcc-8 (or gcc-7 ), # dpkg -L gcc-8 | grep -i man /usr/share/man /usr/share/man/man1 /usr/share/man/man1/x86_64-linux-gnu-gcc-ar-8.1.gz /usr/share/man/man1/x86_64-linux-gnu-gcc-nm-8.1.gz /usr/share/man/man1/x86_64-linux-gnu-gcc-ranlib-8.1.gz /usr/share/man/man1/gcc-ar-8.1.gz /usr/share/man/man1/gcc-nm-8.1.gz /usr/share/man/man1/gcc-ranlib-8.1.gz I'm quite sure previously there was a man gcc . I'm using Debian 10.0 Buster (testing).
The manpages are provided in contrib packages, gcc-doc etc. (See the links at the top-right of the linked page for all the releases where the package is available.) Debian 10’s default compiler is GCC 8. The GCC 8 documentation wasn’t packaged in time for Debian 10’s release , but it is available in backports , along with the corresponding gcc-doc package . To install it, you need to enable backports with contrib and non-free , and install it from there explicitly: echo deb http://deb.debian.org/debian buster-backports main contrib non-free | sudo tee /etc/apt/sources.list.d/buster-backports.list sudo apt update sudo apt install -t buster-backports gcc-doc Note though, even with gcc-doc you may want to install manpages-posix-dev for access to POSIX docs on ISO C standard library docs.
{ "source": [ "https://unix.stackexchange.com/questions/523079", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
523,095
Yesterday, my laptop with OpenSuse 15.1 died after a couple of years of use. I went out and bought a new HP Spectre X360 15 laptop ( 15-DF0033DX ) that has 16GB of RAM, 1 256GB SSD NVMe drive, and don't even know what video it has since I am not super interested in that. I went into the BIOS to make sure the boot order is: 1) CD/DVD Boot 2) USB Boot 3) HDD Boot I also made sure that Legacy Boot is disabled and Secured Boot is disabled. There are 3 partitions on this drive: EFI Boot 260MB, C:\ with 475 GB of space, and a 27GB Restoration Drive. The C:\ drive had BitLocker on it for encryption, but I turned that off. This machine also has some sort of Intel Optane Memory and Storage management. I have created a DVD for latest/greatest OpenSuse Leap 15.1 and I can boot to the installation. I get past half the installation where I can choose the KDE Desktop UI and then the next step is the Partitioning. However, it is at this point that the OpenSuse says there is an error with the system and that it says: cannot delete mdContainer For the life of me, I cannot even find these words anywhere on the internet, and I have spent literally the last 24 hours looking. The drive partitions it shows me are: 475 GB nvme0 and 27 GB nvme1. When I go to create new partitions, it tells me that the drive is in use ... what could be using it and locking it like that?????? I've never had this issue before. I tried to create a partition on the larger drive: /efi/boot 1GB FAT / 20GB BRTFS /home 475GB EXT4 I get the error that it tells me the device is being used, and I don't get to see what is using the drive ... just that it is being used? Then I set my username/password, the system username/password, then I go with the default install for software since I will change it later ... and when I go to install ... the screen goes blank and then it starts the DOS-UI version of the installation ... and I go through the whole process all over again. I tried to find an answer on the OpenSuse Forums, and I created a new account there, but I am unable to verify my registration because of problems on their side. So, I have come here as a last resort. The next step might be to just delete the partitions in Windows 10 to just get rid of them and then hopefully that would be enough to do it. So, I am looking to find out what this error message means: cannot delete mdContainer How can I fix this, if I can, and then install opensuse. I am also going to try installing Fedora on this machine if I can. Other sites I went to said they were able to install 5 different versions of Linux on the HP Spectre x360 laptop. So, maybe my system has a legitimate issue, and I need to exchange it?
{ "source": [ "https://unix.stackexchange.com/questions/523095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181979/" ] }
523,327
I have the Unicode character ᚠ, represented by its Unicode code point 16A0, in a text file (the text file is encoded(?) as utf-8). When I do grep '\u16A0' test.txt I get no result. How do I grep that character?
You can use ANSI-C quoting provided by your shell, to replace backslash-escaped characters as specified by the ANSI C standard. This should work for any command, not just grep , in shells like Bash and Zsh: grep $'\u16A0' For some more complex examples, you might refer to this related question and its answers.
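If your grep is GNU grep built with PCRE support and you are in a UTF-8 locale, two further options (alternatives, not requirements) are:
grep -P '\x{16A0}' test.txt          # PCRE code-point escape
grep $'\xe1\x9a\xa0' test.txt        # the raw UTF-8 bytes of U+16A0 (E1 9A A0)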
{ "source": [ "https://unix.stackexchange.com/questions/523327", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/356586/" ] }
524,254
In my copy of the conda.sh script, I see the following lines: if [ -n "${_CE_CONDA}" ] && [ -n "${WINDIR+x}" ]; then SYSP=$(\dirname "${CONDA_EXE}") else SYSP=$(\dirname "${CONDA_EXE}") SYSP=$(\dirname "${SYSP}") fi I am curious as to why there is a backslash in front the the d in dirname . I do not believe it is necessary. This use of backslashes also appears in other places in the source file. Is there a reason for doing this that I am missing?
Backslash will suppress alias expansion, ie it executes the original command and makes sure that alias version does not run. Scripts can unknowingly run with alias expansion when the system has set shopt -s expand_aliases (BASH only) or if it is executed using source . ./conda.sh # usually no alias expansion (unless `shopt -s expand_aliases` in BASH) source ./conda.sh # alias expansion . ./conda.sh # alias expansion Some sysadmins like to put backslash in everything as a preventive measure against side-effects of aliases, just in case it was aliased unintentionally somewhere else and the alias gets expanded as explained previously. For example, if the system has set this alias dirname='dirname -z' somewhere and the condition allows the alias to be expanded, then a script that tries to call dirname will unfortunately call dirname -z instead, which was not the script intended. If there's certainty that such alias do not exist, we can remove all the backslash and it should work fine. Alternatively, one can use command instead of backslash version to suppress alias. Thus, instead of \dirname , one can use command dirname , which might look more readable. (For built-in commands like cd , one should use builtin instead). I prefer this instead, as it also bypasses function with same name as well as any aliases.
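A contrived demonstration of what the backslash is guarding against:
$ alias dirname='echo intercepted:'
$ dirname /usr/bin/env
intercepted: /usr/bin/env
$ \dirname /usr/bin/env
/usr/bin
$ command dirname /usr/bin/env
/usr/bin
$ unalias dirname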
{ "source": [ "https://unix.stackexchange.com/questions/524254", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209133/" ] }
524,506
I'm trying to write a function to replace the functionality of the exit builtin to prevent myself from exiting the terminal. I have attempted to use the SHLVL environment variable but it doesn't seem to change within subshells: $ echo $SHLVL 1 $ ( echo $SHLVL ) 1 $ bash -c 'echo $SHLVL' 2 My function is as follows: exit () { if [[ $SHLVL -eq 1 ]]; then printf '%s\n' "Nice try!" >&2 else command exit fi } This won't allow me to use exit within subshells though: $ exit Nice try! $ (exit) Nice try! What is a good method to detect whether or not I am in a subshell?
In bash, you can compare $BASHPID to $$ $ ( if [ "$$" -eq "$BASHPID" ]; then echo not subshell; else echo subshell; fi ) subshell $ if [ "$$" -eq "$BASHPID" ]; then echo not subshell; else echo subshell; fi not subshell If you're not in bash, $$ should remain the same in a subshell, so you'd need some other way of getting your actual process ID. One way to get your actual pid is sh -c 'echo $PPID' . If you just put that in a plain ( … ) it may appear not to work, as your shell has optimized away the fork. Try extra no-op commands ( : ; sh -c 'echo $PPID'; : ) to make it think the subshell is too complicated to optimize away. Credit goes to John1024 on Stack Overflow for that approach.
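Plugged into the exit wrapper from the question, that check could look like this (a bash-only sketch, combining it with the original SHLVL test):
exit () {
    if [[ $BASHPID -eq $$ && $SHLVL -eq 1 ]]; then
        printf '%s\n' "Nice try!" >&2     # top-level shell: refuse
    else
        command exit "$@"                 # subshell or nested shell: really exit
    fi
}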
{ "source": [ "https://unix.stackexchange.com/questions/524506", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237982/" ] }
524,760
I'm basically looking for files then sorting by the size. The script works if I don't sort the size by human readable. But I want the size to be human readable. How can I sort sizes that are human readable? For example: ls -l | sort -k 5 -n | awk '{print $9 " " $5}' This works as expected, I got the size of my files in bytes ascending: 1.txt 1 test.txt 3 bash.sh* 573 DocGeneration.txt 1131 andres_stuff.txt 1465 Branches.xlsx 15087 foo 23735 bar 60566 2016_stuff.pdf 996850 Now, I want the size to be human readable, so I added an -h parameter to ls, and now some files are out of order: ls -lh | sort -k 5 -n | awk '{print $9 " " $5}' 1.txt 1 DocGeneration.txt 1.2K andres_stuff.txt 1.5K test.txt 3 Branches.xlsx 15K foo 24K bar 60K bash.sh* 573 2016_stuff.pdf 974K
Try sort -h -k2 . From the manual: -h, --human-numeric-sort compare human readable numbers (e.g., 2K 1G) It is part of GNU sort, BSD sort, and others.
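Applied to the output in the question (GNU coreutils assumed), keeping the name/size columns and sorting on the human-readable sizes in column 2:
ls -lh | awk 'NR>1 {print $9 " " $5}' | sort -k 2 -h   # NR>1 skips the "total" line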
{ "source": [ "https://unix.stackexchange.com/questions/524760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/357731/" ] }
524,963
I'm aware of the methods where you can run a Bash for loop and ping multiple servers, is there a Linux CLI tool that I can use which will allow for me to do this without having to resort to writing a Bash script to ping a list of servers one at a time? Something like this: $ ping host1 host2 host3 NOTE: I'm looking specifically for CentOS/Fedora, but if it works on other distros that's fine too.
If you look into the NMAP project you'll find that it includes additional tools on top of just nmap . One of these tools is nping , which includes the following ability: Nping has a very flexible and powerful command-line interface that grants users full control over generated packets. Nping's features include: Custom TCP, UDP, ICMP and ARP packet generation. Support for multiple target host specification. Support for multiple target port specification. ... nping is in the standard EPEL repos to boot. $ repoquery -qlf nmap.x86_64 | grep nping /usr/bin/nping /usr/share/man/man1/nping.1.gz Usage To ping multiple servers you merely have to tell nping the names/IPs and which protocol you want to use. Here since we want to mimic what the traditional ping CLI does we'll use ICMP. $ sudo nping -c 2 --icmp scanme.nmap.org google.com Starting Nping 0.7.70 ( https://nmap.org/nping ) at 2019-06-14 13:43 EDT SENT (0.0088s) ICMP [10.3.144.95 > 45.33.32.156 Echo request (type=8/code=0) id=42074 seq=1] IP [ttl=64 id=57921 iplen=28 ] RCVD (0.0950s) ICMP [45.33.32.156 > 10.3.144.95 Echo reply (type=0/code=0) id=42074 seq=1] IP [ttl=46 id=24195 iplen=28 ] SENT (1.0091s) ICMP [10.3.144.95 > 45.33.32.156 Echo request (type=8/code=0) id=42074 seq=2] IP [ttl=64 id=57921 iplen=28 ] SENT (2.0105s) ICMP [10.3.144.95 > 45.33.32.156 Echo request (type=8/code=0) id=42074 seq=2] IP [ttl=64 id=57921 iplen=28 ] RCVD (2.0107s) ICMP [45.33.32.156 > 10.3.144.95 Echo reply (type=0/code=0) id=42074 seq=2] IP [ttl=46 id=24465 iplen=28 ] SENT (3.0138s) ICMP [10.3.144.95 > 64.233.177.100 Echo request (type=8/code=0) id=49169 seq=2] IP [ttl=64 id=57921 iplen=28 ] Statistics for host scanme.nmap.org (45.33.32.156): | Probes Sent: 2 | Rcvd: 2 | Lost: 0 (0.00%) |_ Max rtt: 86.053ms | Min rtt: 0.188ms | Avg rtt: 43.120ms Statistics for host google.com (64.233.177.100): | Probes Sent: 2 | Rcvd: 0 | Lost: 2 (100.00%) |_ Max rtt: N/A | Min rtt: N/A | Avg rtt: N/A Raw packets sent: 4 (112B) | Rcvd: 2 (108B) | Lost: 2 (50.00%) Nping done: 2 IP addresses pinged in 3.01 seconds The only drawback I've found with this tool is the use of ICMP mode requiring root privileges. $ nping -c 2 --icmp scanme.nmap.org google.com Mode ICMP requires root privileges.
{ "source": [ "https://unix.stackexchange.com/questions/524963", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
525,013
I have a purge process that I am running and the command I'm using is : find $sentPurgerFolder -mtime +7 -print -delete >> $sentPurgeLogFile 2>&1 This code is in a while loop that is looping thru clients that have multiple folders of data to purge. The intent is to send all the purge info, regardless of the number of folders for that client to one log-file for that client. And that part seems to work pretty good. My disconnect is I would like to send the same output to a master logfile, however the examples of 'tee' that I have seen give me pause. I do not know how I would integrate that command into my code here without doubling up the log data. Can anyone lend some insight or make a suggestion?
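One way to fan the same output out to both logs is tee -a inside the existing redirection (a sketch; $masterPurgeLogFile is a placeholder for whatever the combined log should be called):
find "$sentPurgerFolder" -mtime +7 -print -delete 2>&1 \
    | tee -a "$sentPurgeLogFile" >> "$masterPurgeLogFile"
tee -a appends a copy to the per-client log while its standard output is appended to the master log, so nothing is written twice to either file.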
{ "source": [ "https://unix.stackexchange.com/questions/525013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/355563/" ] }
525,243
To my understanding, for manipulating files there is only the sys_write syscall in Linux, which overwrites the file content (or extends it, if at the end). Why are there no syscalls for inserting or deleting content in files in Linux? As all current file systems do not require the file to be stored in a continuous memory block, an efficient implementation should be possible. (The files would get fragmented.) With file system features as "copy on write" or "transparent file compression", the current way of inserting content seems to be very inefficient.
On recent Linux systems that is actually possible, but with block (4096 most of the time), not byte granularity, and only on some filesystems (ext4 and xfs). Quoting from the fallocate(2) manpage: int fallocate(int fd, int mode, off_t offset, off_t len); [...] Collapsing file space Specifying the FALLOC_FL_COLLAPSE_RANGE flag (available since Linux 3.15) in mode removes a byte range from a file, without leaving a hole. The byte range to be collapsed starts at offset and continues for len bytes. At the completion of the operation, the contents of the file starting at the location offset+len will be appended at the location offset , and the file will be len bytes smaller. [...] Increasing file space Specifying the FALLOC_FL_INSERT_RANGE flag (available since Linux 4.1) in mode increases the file space by inserting a hole within the file size without overwriting any existing data. The hole will start at offset and continue for len bytes. When inserting the hole inside file, the contents of the file starting at offset will be shifted upward (i.e., to a higher file offset) by len bytes. Inserting a hole inside a file increases the file size by len bytes.
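The same operations are reachable from the shell via util-linux's fallocate(1) (offsets and lengths must be multiples of the filesystem block size, and the filesystem has to support it, e.g. ext4 or XFS):
# remove the 4096 bytes at offset 4096, shifting later data down:
fallocate --collapse-range --offset 4096 --length 4096 somefile
# insert a 4096-byte hole at offset 4096, shifting later data up:
fallocate --insert-range --offset 4096 --length 4096 somefile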
{ "source": [ "https://unix.stackexchange.com/questions/525243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/358092/" ] }
525,653
When trying to call /dev/tcp/www.google.com/80 , by typing /dev/tcp/www.google.com/80 Bash says no such file or directory . When looking at other people's code online, they use syntax such as 3<>/dev/tcp/www.google.com/80 I noticed that this works as well: </dev/tcp/www.google.com/80 Why are these symbols required to call certain things in bash?
Because that's a feature of the shell (of ksh, copied by bash), and the shell only. /dev/tcp/... are not real files, the shell intercepts the attempts to redirect to a /dev/tcp/... file and then does a socket(...);connect(...) (makes a TCP connection) instead of a open("/dev/tcp/..."...) (opening that file) in that case. Note that it has to be spelled like that. cat < /dev/./tcp/... or ///dev/tcp/... won't work, and will attempt to open those files instead (which on most systems don't exist and you'll get an error). The direction of the redirection also doesn't matter. Whether you use 3< /dev/tcp/... or 3> /dev/tcp/... or 3<> /dev/tcp/... or even 3>> /dev/tcp/... won't make any difference, you'll be able to both read and write from/to that file descriptor to receive/send data over that TCP socket. When you do cat /dev/tcp/... , that doesn't work because cat doesn't implement that same special handling, it does a open("/dev/tcp/...") like for every file (except - ), only the shell (ksh, bash only) does, and only for the target of redirections. That cat - is another example of a file path handled specially, this time, by cat , not the shell. Instead of doing a open("-") and reading the input from the resulting file descriptor, cat reads directly from the file descriptor 0 (stdin). cat and many text utilities do that, the shell doesn't for its redirections. To read the content of the - file, you need cat ./- , or cat < - (or cat - < - ). On systems that don't a have /dev/stdin , bash will however do something similar for redirections from that (virtual) file. GNU awk does the same for /dev/stdin , /dev/stdout , /dev/stderr even on systems that do have such files which can cause some surprises on systems like Linux where those files behave differently. zsh also has TCP (and Unix domain stream) socket support, but that's done with a ztcp (and zsocket ) builtins, so it's less limited than the ksh/bash approach. In particular, it can also act as a server which ksh/bash can't do. It's still much more limited than what you can do in a real programming language though.
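A minimal end-to-end example of the feature (bash or ksh93 only; the host is the one from the question):
exec 3<>/dev/tcp/www.google.com/80       # connect; fd 3 is now the socket
printf 'GET / HTTP/1.0\r\nHost: www.google.com\r\n\r\n' >&3
cat <&3                                  # read the response
exec 3>&-                                # close the connection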
{ "source": [ "https://unix.stackexchange.com/questions/525653", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/357569/" ] }
525,921
For example {a..c}{1..3} expands to a1 a2 a3 b1 b2 b3 c1 c2 c3 . If I wanted to print a1 b1 c1 a2 b2 c2 a3 b3 c3 , is there an analogous way to do that? What's the simplest way?
You could do: $ eval echo '{a..c}'{1..3} a1 b1 c1 a2 b2 c2 a3 b3 c3 Which then tells the shell to evaluate: echo {a..c}1 {a..c}2 {a..c}3
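If you would rather avoid eval, a small loop produces the same order (sketch):
for n in {1..3}; do printf '%s ' {a..c}$n; done; echo
# a1 b1 c1 a2 b2 c2 a3 b3 c3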
{ "source": [ "https://unix.stackexchange.com/questions/525921", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/338440/" ] }
525,935
I use to have a dual system with KDE Neon and Windows 10, all happy together in my lap Asus s510U.  One week ago, the f… Windows decided to update without questioning me. It also changed all my BIOS configuration; this action made it impossible for my Linux to boot. So what I did in sequential order is: Change the BIOS again; now Linux worked fine again. After using both systems (I ca’nt remember how often) without problem, I was using Linux, the next day when I started my computer I had a black screen.  It doesn’t load anything. (I think I have GRUB.) So I have problems to start live Mint USB in UEFI mode, but finally I started it, installed boot-repair, and it supposed to repair it.  (It is important to mention that I have boot in a separate partition of 500 MB.) But when I start again, it stops in the grub menu, without booting Linux. Because it didn’t work, I tried to reinstall grub manually, but I have the same results. I try to did this on the bios… sdb1/EFI/neon/shimx64.efi, but it came in different nomenclature. I found the file shimx64.efi but doesn’t boot at all. Can somebody help me?  I don’t really know how to fix it.  I leave you some info of the boot.repair and of my system configuration. Boot successfully repaired!!! Boot Repair URL: http://paste.ubuntu.com/p/ZnGHZ4HmG5/ http://paste.ubuntu.com/p/ZnGHZ4HmG5/ My disk: sudo fdisk -l Disk /dev/sdb: 119.2 GiB, 128035676160 bytes, 250069680 sectors<br> Units: sectors of 1 * 512 = 512 bytes<br> Sector size (logical/physical): 512 bytes / 4096 bytes<br> I/O size (minimum/optimal): 4096 bytes / 4096 bytes<br> Disklabel type: gpt<br> Disk identifier: xxx Device Start End Sectors Size Type<br> /dev/sdb1 2048 534527 532480 260M EFI System<br> /dev/sdb2 534528 567295 32768 16M Microsoft reserved<br> /dev/sdb3 567296 121028607 120461312 57.5G Microsoft basic data<br> /dev/sdb4 248430592 250068991 1638400 800M Windows recovery environment<br> /dev/sdb5 121028608 122052607 1024000 500M Linux filesystem<br> /dev/sdb6 122052608 126148607 4096000 2G Linux swap<br> /dev/sdb7 126148608 248429857 122281250 58.3G Linux filesystem<br> My boot directory: /mnt/boot$ ls -al total 194116 drwxr-xr-x 5 root root 4096 Jun 19 00:23 . drwxr-xr-x 25 root root 4096 Jun 19 00:23 .. -rw------- 1 root root 4049455 Jan 29 15:39 System.map-4.15.0-45-generic -rw------- 1 root root 4051528 Jun 4 20:33 System.map-4.15.0-52-generic -rw-r--r-- 1 root root 217019 Jan 29 15:39 config-4.15.0-45-generic -rw-r--r-- 1 root root 217278 Jun 4 20:33 config-4.15.0-52-generic drwxr-xr-x 2 root root 4096 Jun 19 00:20 efi drwxr-xr-x 5 root root 4096 Jun 19 00:24 grub drwxr-xr-x 5 root root 4096 Jun 19 00:22 grub.bak -rw-r--r-- 1 root root 57867618 Feb 24 02:26 initrd.img-4.15.0-43-generic -rw-r--r-- 1 root root 57863844 Feb 24 21:43 initrd.img-4.15.0-45-generic -rw-r--r-- 1 root root 57899212 Jun 19 00:23 initrd.img-4.15.0-52-generic -rw------- 1 root root 8281848 Jan 29 16:11 vmlinuz-4.15.0-45-generic -rw------- 1 root root 8294136 Jun 4 20:39 vmlinuz-4.15.0-52-generic Efi: mint@mint:/tmp/boot$ sudo efibootmgr -v BootCurrent: 0005 Timeout: 1 seconds BootOrder: 0001,0000,0003,0002,0004,0005 Boot0000* Windows Boot Manager HD(1,GPT,533df41a-4161-4850-a540-122090825ef0,0x800,0x82000)/File(\EFI\MICROSOFT\BOOT\BOOTMGFW.EFI)WINDOWS.........x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}.................... 
Boot0001* neon HD(1,GPT,533df41a-4161-4850-a540-122090825ef0,0x800,0x82000)/File(\EFI\NEON\SHIMX64.EFI) Boot0002* Efi prueba HD(1,GPT,533df41a-4161-4850-a540-122090825ef0,0x800,0x82000)/File(\bootx64.efi) Boot0003* Hard Drive BBS(HD,,0x0)..GO..NO........o.T.O.S.H.I.B.A. .M.Q.0.4.A.B.F.1.0.0....................A...........................>..Gd-.;.A..MQ..L. . . . . . . . . . .4. .N.8.P.8.1.A.T.K........BO..NO........o.T.O.S.H.I.B.A. .T.H.N.S.N.K.1.2.8.G.V.N.8....................A...........................>..Gd-.;.A..MQ..L. . . . . . . . .8.4.S.N.0.1.9.K.M.T.T.Y........BO..NO........c.A.D.A.T.A. .U.S.B. .F.l.a.s.h. .D.r.i.v.e. .1.1.0.0....................A.......................6..Gd-.;.A..MQ..L.2.6.8.2.6.2.1.0.0.1.1.7.0.0.1.9........BO Boot0004* linux efi pma HD(1,GPT,533df41a-4161-4850-a540-122090825ef0,0x800,0x82000)/File(\grubx64.efi) Boot0005* UEFI: ADATA USB Flash Drive 1100, Partition 1 PciRoot(0x0)/Pci(0x14,0x0)/USB(2,0)/HD(1,MBR,0x70d993e5,0x800,0x1c3d800)..BO In my /etc/fstab was commented the part of mounting the boot partition.. That is very strange because I am sure I dindt do that. My grub.cfg has this: search.fs_uuid a5da64fd-c3bd-4689-a6ef-c5fc1ddd17ac root hd1,gpt7 set prefix=($root)'/boot/grub' configfile $prefix/grub.cfg Which points to the non boot partition. I have 2 diferents boot directory, one of the partition ( the original ) and another under /. Maybe in one update the system changed...
{ "source": [ "https://unix.stackexchange.com/questions/525935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/358701/" ] }