339,840
How to start ssh-agent as a systemd service? There are some suggestions on the net, but they are not complete. How to add unencrypted keys automatically if the ssh-agent service was started successfully? Probably, adding keys from the list in ~/.ssh/.session-keys would be good. How to set SSH_AUTH_SOCK in any login session afterwards? The most correct way is to push it from the ssh-agent service to the systemd-logind service (I have no idea if that's even possible). The plain naive way is to just add it to /etc/profile.
To create a systemd ssh-agent service, you need to create a file in ~/.config/systemd/user/ssh-agent.service because ssh-agent is user isolated.

[Unit]
Description=SSH key agent

[Service]
Type=simple
Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket
ExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK

[Install]
WantedBy=default.target

Add SSH_AUTH_SOCK="${XDG_RUNTIME_DIR}/ssh-agent.socket" to ~/.config/environment.d/ssh_auth_socket.conf.

Finally, enable and start this service:

systemctl --user enable --now ssh-agent

And, if you are using an ssh version higher than 7.2:

echo 'AddKeysToAgent yes' >> ~/.ssh/config

This will instruct the ssh client to always add the key to a running agent, so there's no need to ssh-add it beforehand. Note that when you create the ~/.ssh/config file you may need to run:

chmod 600 ~/.ssh/config

or

chown $USER ~/.ssh/config

Otherwise, you might receive the "Bad owner or permissions on ~/.ssh/config" error.
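As for the question's other point, loading the keys listed in ~/.ssh/.session-keys once the agent is up, a minimal sketch is a drop-in with an ExecStartPost= line. The drop-in file name is arbitrary, the short sleep papers over the race between the agent starting and its socket appearing, and this assumes .session-keys holds one key path per line with no spaces:

# ~/.config/systemd/user/ssh-agent.service.d/add-keys.conf (hypothetical drop-in)
[Service]
# SSH_AUTH_SOCK is inherited from the Environment= line of the main unit,
# so ssh-add talks to the agent this unit just started.
ExecStartPost=/bin/sh -c 'if [ -f %h/.ssh/.session-keys ]; then sleep 1; xargs -r ssh-add < %h/.ssh/.session-keys; fi'

Run systemctl --user daemon-reload afterwards so the drop-in is picked up.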
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/339840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87248/" ] }
339,854
I asked this question. While I got the answer (and am kicking myself for not trying to pipe the output via cat or cat -n as shared by Stephen), is there a way to make an alias or something so that whenever I run $ dpkg -L $PACKAGENAME I get the listing with line numbers? Is that possible?
The alias part:

alias dpkgnum='function __dpkgnum { dpkg -L $1 | nl;};__dpkgnum'

As noted in the comments, including just the function in your .bashrc or .bash_aliases file, without the alias, will also work:

function dpkgnum { dpkg -L $1 | nl;}

# call it from the terminal
$ dpkgnum pkgname

In this case the function will not be visible as an alias but as a system function, and can be listed (among other system vars) with declare.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
339,943
I'm attempting to take the last word or phrase using grep for a specific pattern. In this example, it would be from the last comma to the end of the line:

Blah,3,33,56,5,Foo 30,,,,,,,3,Great Value

And so the wanted output for that line would be "Great Value". All the lines are different lengths as well, but always have a single comma preceding the last words. Basically, I would like to simply output from the last comma to the end of the line. Thank you!
Here:

grep -o '[^,]\+$'

[^,]\+ matches one or more characters that are not , at the end of the line ($)
-o prints only the matched portion

Example:

% grep -o '[^,]\+$' <<<'Blah,3,33,56,5,Foo 30,,,,,,,3,Great Value'
Great Value
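If you'd rather not rely on grep -o, an equivalent approach (interchangeable for input like this) is to treat the comma as a field separator and print the last field with awk, or delete everything up to the last comma with sed:

% awk -F, '{ print $NF }' infile
% sed 's/.*,//' infile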
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/339943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211032/" ] }
339,946
I am making a word list from all the words typed into a program that will be working with the text entered into it. I want to use a stream editor like sed or awk to search the first word of every line in a document and return the line number when a pattern (stored in a variable) is found. I have this much working correctly so far:

awk $1 '{print $1 " " NR""}' dictionary.txt | awk '/^myWord /' | cut -d" " -f2

However, I cannot figure out how to use a variable in place of "myWord". For example, I get only errors when I use:

read=searchWord
awk $1 '{print $1 " " NR""}' words.txt | awk '/^$searchWord /' | cut -d" " -f2
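The shell does not expand $searchWord inside a single-quoted awk script, which is why the pattern never matches. A sketch of one way to fix it (not the only one) is to import the shell variable with awk's -v option and do the whole job in a single awk invocation:

read searchWord
awk -v w="$searchWord" '$1 == w { print NR }' dictionary.txt

Here -v w="$searchWord" makes the shell variable available inside awk as w, $1 == w compares the first word of each line against it, and print NR prints the line number of each match.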
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/339946", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212258/" ] }
339,954
The zshcompsys man-page says:

INITIALIZATION
If the system was installed completely, it should be enough to call the shell function compinit from your initialization file; see the next section. However, the function compinstall can be run by a user to configure various aspects of the completion system.

zsh can't find the commands though:

genesis% compinit
zsh: command not found: compinit
genesis% compinstall
zsh: command not found: compinstall
genesis% echo $PATH
/home/ravi/bin:/home/ravi/.gem/ruby/2.4.0/bin:/home/ravi/bin:/home/ravi/.gem/ruby/2.4.0/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/usr/local/heroku/bin:/home/ravi/.cabal/bin:/home/ravi/.local/share/fzf/bin:/usr/local/heroku/bin:/home/ravi/.cabal/bin
genesis%

I'm led to ask this question because when starting zsh I get:

tmuxinator.zsh:20: command not found: compdef

How do I get zsh to find these completion commands?
This is the same issue I got on my Mac. I am using the zsh shell. compdef is basically a function used by zsh to load the auto-completions. The completion system needs to be activated. If you're using something like oh-my-zsh then this is already taken care of; otherwise you'll need to add the following to your ~/.zshrc:

autoload -Uz compinit
compinit

Completion functions can be registered manually by using the compdef function directly. But compinit needs to be loaded in context before using compdef.
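To illustrate registering a completion by hand once the system is initialised (mycommand here is a placeholder for any program on your PATH), zsh's generic completer for GNU-style tools can be attached like so:

autoload -Uz compinit
compinit
# complete mycommand's long options by parsing its --help output
compdef _gnu_generic mycommand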
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/339954", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
339,992
I would like to read different lines of a text file to different variables. For example, input.txt:

line1 foo foobar bar
line2 bar
line3 foo
line4 foobar bar

I want this result to be stored in variables var1, var2, var3 and var4 such that

var1=line1 foo foobar bar
var2=line2 bar

and so on. Could someone please tell me how it is done. I tried to use eval in a for loop. It doesn't seem to work.
You'd do:

unset -v line1 line2
{ IFS= read -r line1 && IFS= read -r line2; } < input.txt

Or:

{ line1=$(line) && line2=$(line); } < input.txt

(less efficient as line is rarely built-in and most shells need to fork to implement command substitution. line is also no longer a standard command).

To use a loop:

unset -v line1 line2 line3
for var in line1 line2 line3; do
  IFS= read -r "$var" || break
done < input.txt

Or to automatically define the names of the variables as line<++n>:

n=1
while IFS= read -r "line$n"; do
  n=$((n + 1))
done < input.txt

Note that bash supports array variables and a readarray builtin to read lines into an array:

readarray -t line < input.txt

Note however that contrary to most other shells, bash array indices start at 0 not 1 (inherited from ksh), so the first line will be in ${line[0]}, not ${line[1]} (though as @Costas has shown, you can make readarray (aka mapfile) start writing the values at index 1 (bash arrays, also contrary to most other shells', being sparse arrays) with -O 1). See also: Understand "IFS= read -r line"?
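Spelling out that last remark as a runnable line (with the same input.txt as above), so the numbering lines up with the question's var1, var2, ...:

readarray -t -O 1 line < input.txt
printf '%s\n' "${line[1]}"    # first line of the file, now at index 1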
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339992", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212295/" ] }
340,010
I need a script that will create a file with the next file in a sequence. Each execution of the script should only create one file, and the script could be run zero or more times on any given day. The files should be named after the current date in format %y%m%d, with the second file having -01 appended, the third file to be created on a given date would have -02, etc. For example:

20170125.txt    // first file created on the day
20170125-01.txt // 2nd file
20170125-02.txt // 3rd file

So far I've got this super basic script that creates my first daily file, but I'm stumped as to how to do the incremental numbering after that.

#! /bin/bash
DATE=`date +%Y%m%d`
touch "$DATE.txt"
today=$( date +%Y%m%d )    # or: printf -v today '%(%Y%m%d)T' -1
number=0
fname=$today.txt

while [ -e "$fname" ]; do
    printf -v fname '%s-%02d.txt' "$today" "$(( ++number ))"
done

printf 'Will use "%s" as filename\n' "$fname"
touch "$fname"

today gets today's date, and we initialise our counter, number, to zero and create the initial filename as the date with a .txt suffix. Then we test to see if the filename already exists. If it does, increment the counter and create a new filename using printf. Repeat until we no longer have a filename collision. The format string for the printf, %s-%02d.txt, means "a string followed by a literal dash followed by a zero-filled two-digit integer and the string .txt". The string and the integer are given as further arguments to printf. The -v fname puts the output of printf into the variable fname. The touch is just there for testing. This will generate filenames like

20170125.txt
20170125-01.txt
20170125-02.txt
20170125-03.txt

etc. on subsequent runs.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/340010", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1731/" ] }
340,013
I have just finished setting up a server on AWS ECS. All of the configurations are identical to a different server I have, except for the Apache version (from 2.2 to 2.4) and PHP version (from 5.3 to 5.6). I have modified my index.php file to only print out phpinfo(), but I keep getting:

Bad Request
Your browser sent a request that this server could not understand.
Additionally, a 400 Bad Request error was encountered while trying to use an ErrorDocument to handle the request.
Apache/2.4.25 (Amazon) Server at xxx.yyy.com Port 80

I have looked at all the logs saved from my accesses and this is what I get from this specific access:

error_log

[Tue Jan 24 16:20:46.154208 2017] [suexec:notice] [pid 32139] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Jan 24 16:20:46.249527 2017] [auth_digest:notice] [pid 32146] AH01757: generating secret for digest authentication ...
[Tue Jan 24 16:20:46.250415 2017] [lbmethod_heartbeat:notice] [pid 32146] AH02282: No slotmem from mod_heartmonitor
[Tue Jan 24 16:20:46.276823 2017] [mpm_prefork:notice] [pid 32146] AH00163: Apache/2.4.25 (Amazon) OpenSSL/1.0.1k-fips configured -- resuming normal operations
[Tue Jan 24 16:20:46.276840 2017] [core:notice] [pid 32146] AH00094: Command line: '/usr/sbin/httpd'

access_log

90.152.127.182 - - [24/Jan/2017:16:21:03 +0000] "GET / HTTP/1.1" 400 437 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"
90.152.127.182 - - [24/Jan/2017:16:21:04 +0000] "GET /favicon.ico HTTP/1.1" 400 437 "http://xxx.yyy.com/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"

ssl_access_log

90.152.127.182 - - [24/Jan/2017:16:12:24 +0000] "GET / HTTP/1.1" 400 434 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"
90.152.127.182 - - [24/Jan/2017:16:12:25 +0000] "GET /favicon.ico HTTP/1.1" 400 434 "https://xxx.yyy.com/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"

What could be causing my 400 status? Before you ask, I have installed mod_ssl, yes.
Make sure you do not have illegal characters in your virtual hosts' ServerName . I ran into this issue while migrating "sub_domain.test.com" from Apache 2.2 to 2.4. The underscore in "sub_domain" caused Apache 2.4 to respond with 400 (Bad Request) but no further hint in the error log. Also have a look at https://stackoverflow.com/questions/2180465/can-domain-name-subdomains-have-an-underscore-in-it
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212312/" ] }
340,067
Consider the following input file:

1
2
3
4

Running

{ grep -q 2; cat; } < infile

doesn't print anything. I'd expect it to print

3
4

I can get the expected output if I change it to

{ sed -n 2q; cat; } < infile

Why doesn't the first command print the expected output? It's a seekable input file, and per the standard under OPTIONS:

-q  Quiet. Nothing shall be written to the standard output, regardless of matching lines. Exit with zero status if an input line is selected.

and further down, under APPLICATION USAGE (emphasis mine):

The -q option provides a means of easily determining whether or not a pattern (or string) exists in a group of files. When searching several files, it provides a performance improvement (because it can quit as soon as it finds the first match) [...]

Now, per the same standard (in Introduction, under INPUT FILES):

When a standard utility reads a seekable input file and terminates without an error before it reaches end-of-file, the utility shall ensure that the file offset in the open file description is properly positioned just past the last byte processed by the utility [...]

tail -n +2 file
(sed -n 1q; cat) < file
...

The second command is equivalent to the first only when the file is seekable. Why does grep -q consume the whole file? This is GNU grep if it matters (though Kusalananda just confirmed the same happens on OpenBSD).
grep does stop early, but it buffers its input so your test is too short (and yes, I realise my test is imperfect since it's not seekable):

seq 1 10000 | (grep -q 2; cat)

starts at 6776 on my system. That matches the 32KiB buffer used by default in GNU grep:

seq 1 6775 | wc

outputs

6775  6775 32768

Note that POSIX only mentions performance improvements "When searching several files". That doesn't set any expectations up for performance improvements due to partially reading a single file.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/340067", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22142/" ] }
340,109
I have two lists of files ending with two different extensions and I would like to concatenate them in pairs, in a loop. The file names look like these:

a.ID, b.ID, c.ID, d.ID
a.value, b.value, c.value, d.value

Intuitively I would do:

for i in *.ID; do
  for j in *.value; do
    cat $i $j > $i.txt
  done
done

The problem is that I would like to merge a.ID with a.value and b.ID with b.value, and this way they are merged randomly, like a.value with b.ID etc. Any idea? Thanks in advance.

Sample input a.ID (for example):

chr1_237301_237601 176 1
chr1_237601_237901 176 1
chr1_237901_238201 176 1

Sample a.value (for example):

chr1_1_301 0 0
chr1_301_601 0 0
chr1_601_901 0 0
chr1_901_1201 0 0
chr1_1201_1501 0 0

Output:

chr1_237301_237601 176 1
chr1_237601_237901 176 1
chr1_237901_238201 176 1
chr1_1_301 0 0
chr1_301_601 0 0
chr1_601_901 0 0
chr1_901_1201 0 0
chr1_1201_1501 0 0
You don't need two loops. You need a single loop, through a, b, c etc. Like this:

for i in *.ID; do
    b=${i%%.ID}
    cat "$i" "$b".value >"$b".txt
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136099/" ] }
340,143
I have a few locations that need the WiFi to be checked daily. Currently when I run my script this is the outcome I get. The first name corresponds to the first MAC, the first IP and so on. How may I go about re-arranging this file with grep, awk or sed?

Name : WiFi 1
Name : WiFi 2
Name : WiFi 3
Name : WiFi 4
Name : WiFi 5
Name : WiFi 6
Name : WiFi 7
MAC : aa:aa:aa:aa:aa:aa
MAC : bb:bb:bb:bb:bb:bb
MAC : cc:cc:cc:cc:cc:cc
MAC : dd:dd:dd:dd:dd:dd
MAC : ee:ee:ee:ee:ee:ee
MAC : ff:ff:ff:ff:ff:ff
MAC : gg:gg:gg:gg:gg:gg
IP : 10.0.1.0
IP : 10.0.1.1
IP : 10.0.1.2
IP : 10.0.1.3
IP : 10.0.1.4
IP : 10.0.1.5
IP : 10.0.1.6
Status : Operational
Status : Operational
Status : Operational
Status : Operational
Status : Operational
Status : Operational
Status : Operational
Interface : X2
Interface : X2
Interface : X2
Interface : X2
Interface : X2
Interface : X2
Interface : X2

I'd like them all to output as shown below:

Name : WiFi 1
MAC : aa:aa:aa:aa:aa:aa
IP : 10.0.1.0
Status : Operational
Interface : X2
Self-counting version, season to taste:

awk '
  $1!=last {n=0;last=$1}
  {++n;gaggle[n]=gaggle[n]"\n"$0}
  END { for (k in gaggle) print gaggle[k] }'
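One portable caveat: POSIX leaves the traversal order of for (k in gaggle) unspecified, so on some awk implementations the rearranged blocks could come out shuffled. A sketch of a variant with an explicit numeric index avoids that (max is tracked separately in case the record types ever differ in count):

awk '
  $1!=last {n=0;last=$1}
  {++n; if (n>max) max=n; gaggle[n]=gaggle[n]"\n"$0}
  END { for (k=1; k<=max; k++) print gaggle[k] }' file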
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/340143", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212428/" ] }
340,144
I am parsing a very large csv file, where the entry of column 26 has to be of length 10. I can see that there are cases where there are no entries (which is fine), but also cases where the entries have length less than 10 or greater than 10; those have to be erroneous. I am trying to print some of these lines to explore. My attempt is:

awk 'length($26) < 10' my_file.csv | sort -u | cut -d ',' -f 26 | head

but this doesn't return the result I want - instead it returns a number of rows where the length of column 26 is in fact equal to 10. What am I doing wrong?
Without -F, awk splits fields on whitespace, so your $26 was not the 26th comma-separated column. Set the field separator to a comma and test the length there:

awk -F, 'length($26) != 10 { print }' /path/to/input > bad_field_length.txt
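Since the question notes that empty entries are acceptable, a variant for interactive exploration (the same idea, just skipping blanks and tagging each hit with its line number) might look like:

awk -F, '$26 != "" && length($26) != 10 { print NR": "$26 }' my_file.csv | head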
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/340144", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/193168/" ] }
340,283
In systemd service files, one can set the following scheduling related options (from the systemd.exec man page , correct me if I'm wrong): Nice Sets the default nice level (scheduling priority) for executed processes. Takes an integer between -20 (highest priority) and 19 (lowest priority). See setpriority(2) for details. Which is the familiar nice level. It seems its effect is ‘subverted’ somewhat due to the ‘autogroup’ feature of recent linux kernels. So the options below may be what I'd really want to set to keep processes behaving nicely for my desktop experience. CPUSchedulingPolicy Sets the CPU scheduling policy for executed processes. Takes one of other, batch, idle, fifo or rr. See sched_setscheduler(2) for details. CPUSchedulingPriority Sets the CPU scheduling priority for executed processes. The available priority range depends on the selected CPU scheduling policy (see above). For real-time scheduling policies an integer between 1 (lowest priority) and 99 (highest priority) can be used. See sched_setscheduler(2) for details. CPUSchedulingResetOnFork Takes a boolean argument. If true, elevated CPU scheduling priorities and policies will be reset when the executed processes fork, and can hence not leak into child processes. See sched_setscheduler(2) for details. Defaults to false. I understand the last option. I gather from the explanation of the first two that I can choose a scheduling policy and then, given that policy, a priority. It is not entirely clear to me what I should choose for which kind of tasks. For example, is it safe to choose ‘idle’ for backup tasks (relatively CPU intensive, because deduplicating), or is another one better suited? In general, getting an understandable overview of each policy, with each of its priorities and suitability for specific purposes is what I am looking for. Also the interaction with the nice level is of interest. Next to CPU scheduling, there is IO scheduling. I guess this corresponds to ionice (correct me if I'm wrong). IOSchedulingClass Sets the I/O scheduling class for executed processes. Takes an integer between 0 and 3 or one of the strings none, realtime, best-effort or idle. See ioprio_set(2) for details. IOSchedulingPriority Sets the I/O scheduling priority for executed processes. Takes an integer between 0 (highest priority) and 7 (lowest priority). The available priorities depend on the selected I/O scheduling class (see above). See ioprio_set(2) for details. We here see the same structure as with the CPU scheduling. I'm looking for the same kind of information as well. For all the ‘Scheduling’ options, the referred to man pages are not clear enough for me, mostly in translating things to a somewhat technically-inclined desktop user's point of view.
CPUScheduling{Policy|Priority} The link tells you that CPUSchedulingPriority should only be set for fifo or rr ("real-time") tasks. You do not want to force real-time scheduling on services. CPUSchedulingPolicy=other is the default. That leaves batch and idle . The difference between them is only relevant if you have multiple idle-priority tasks consuming CPU at the same time. In theory batch gives higher throughput (in exchange for longer latencies). But it's not a big win, so it's not really relevant in this case. idle literally starves if anything else wants the CPU. CPU priority is rather less significant than it used to be, for old UNIX systems with a single core. I would be happier starting with nice , e.g. nice level 10 or 14, before resorting to idle . See next section. However most desktops are relatively idle most of the time. And when you do have a CPU hog that would pre-empt the background task, it's common for the hog only to use one of your CPUs. With that in mind, I would not feel too risky using idle in the context of an average desktop or laptop. Unless it has an Atom / Celeron / ARM CPU rated at or below about 15 watts ; then I would want to look at things a bit more carefully. Is nice level 'subverted' by the kernel 'autogroup' feature? Yeah. Autogrouping is a little weird. The author of systemd didn't like the heuristic, even for desktops. If you want to test disabling autogrouping, you can set the sysctl kernel.sched_autogroup_enabled to 0 . I guess it's best to test by setting the sysctl in permanent configuration and rebooting, to make sure you get rid of all the autogroups. Then you should be able to nice levels for your services without any problem. At least in current versions of systemd - see next section. E.g. nice level 10 will reduce the weight each thread has in the Linux CPU scheduler, to about 10%. Nice level 14 is under 5%. (Link: full formula ) Appendix: is nice level 'subverted' by systemd cgroups? The current DefaultCPUAccounting= setting defaults to off, unless it can be enabled without also enabling CPU control on a per-service basis. So it should be fine. You can check this in your current documentation: man systemd-system.conf Be aware that per-service CPU control will also be enabled when any service sets CPUAccounting / CPUWeight / StartupCPUWeight / CPUShares / StartupCPUShares. The following blog extract is out of date (but still online). The default behaviour has since changed, and the reference documentation has been updated accordingly. As a nice default, if the cpu controller is enabled in the kernel, systemd will create a cgroup for each service when starting it. Without any further configuration this already has one nice effect: on a systemd system every system service will get an even amount of CPU, regardless how many processes it consists off. Or in other words: on your web server MySQL will get the roughly same amount of CPU as Apache, even if the latter consists a 1000 CGI script processes, but the former only of a few worker tasks. (This behavior can be turned off, see DefaultControllers= in /etc/systemd/system.conf.) On top of this default, it is possible to explicitly configure the CPU shares a service gets with the CPUShares= setting. The default value is 1024, if you increase this number you'll assign more CPU to a service than an unaltered one at 1024, if you decrease it, less. http://0pointer.de/blog/projects/resources.html
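To make this concrete, here is a sketch of a drop-in for a hypothetical backup.service, following the "start with nice, keep idle in reserve" advice above (the unit name and values are placeholders, not prescriptions):

# /etc/systemd/system/backup.service.d/scheduling.conf
[Service]
# Deprioritise the CPU without starving the job entirely.
Nice=14
# Let the desktop's disk I/O always win over the backup's.
IOSchedulingClass=idle
# CPUSchedulingPolicy=idle would be the harsher next step if Nice= proves insufficient.

Run systemctl daemon-reload and restart the service afterwards for the drop-in to take effect.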
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340283", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26980/" ] }
340,394
I had a confusion with this topic between switching between command line and script file interface. I had a nice script written out in the command line that worked, but as soon as I wanted to save it to a .sed file, I remembered I could no longer use the -n . I tried using '!d' flag, but I'm not getting the same output. My question: Is there a way to put the -n in a .sed file, or some other way to stop the automatic printing when I'm in script file interface? I hate to have to convert from protecting my script from the shell to not protecting it, but I guess there is no way around it?
The standard (POSIX) way is to have #n at the start of the script. They have to be the first two bytes of the script. That precludes the use of a she-bang, which is only to be used for scripts run as sed -f the-script (note that she-bangs are not POSIX and POSIX doesn't specify the path of utilities), but as @Kusalananda said, when using a she-bang, you can always do:

#! /path/to/sed -nf

if you want to make an executable sed script on systems that support she-bangs with arguments (most). Note that the #n also works on the command line, as in:

sed '#n
s/foo/bar/p'

And you can add more text after, like:

#no default output
s/foo/bar/p

Or:

#nifty way to turn -n on
s/foo/bar/p

Actually, that's something to bear in mind to avoid turning -n on by mistake. It's a good idea to use a space after # in comments for that (and for legibility).

# no, I don't want to turn -n on
s/foo/bar/
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340394", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211625/" ] }
340,430
I was curious about how hardware interacted with the OS and came across this post: How do keyboard input and text output work? It seems like a lot of the magic happens in the /dev/input directory. I decided to take a look on my own OS (Ubuntu 16.10) to see what I could find out. All of these files are listed as 0 bytes, and when I do

sudo cat mouse0 | hexdump -C

I get a ton of hex data that looks like this:

00000000  b3 82 8a 58 00 00 00 00  53 74 09 00 00 00 00 00  |...X....St......|
00000010  01 00 1c 00 00 00 00 00  b3 82 8a 58 00 00 00 00  |...........X....|
00000020  53 74 09 00 00 00 00 00  00 00 00 00 00 00 00 00  |St..............|
00000030  b6 82 8a 58 00 00 00 00  06 56 0e 00 00 00 00 00  |...X.....V......|
00000040  01 00 10 00 01 00 00 00  b6 82 8a 58 00 00 00 00  |...........X....|
00000050  06 56 0e 00 00 00 00 00  00 00 00 00 00 00 00 00  |.V..............|

So I have a few questions:

What is the purpose of this file? It seems to me that these device files are only used as middle-men to transfer the scancode from the kernel to the X server. Why not just send it directly from the kernel to the X server?

Why are there so many? I have a little over 20 individual event files, but only one keyboard and mouse.
I'll go with the questions in reverse order:

Why are there so many?

Those are devices that stand for most inputs present on a machine (there are others; a microphone, for example, will not be managed in /dev/input). Contrary to the assumption that one keyboard plus one mouse would give 2 devices, even the simplest keyboard and simplest mouse would still give 6 of them. Why 6? Because Xorg will create a test input keyboard and a test input mouse (both virtual) during its startup. Also, it will aggregate the test keyboard with the actual keyboard into a main virtual device, i.e. it will perform muxing of the input. The same will happen to the test and actual mouse. Plus, a typical computer (desktop or laptop) has other buttons apart from the keyboard: power button, sleep button. The eventN devices in there are devices for the things that Xorg creates and for what the computer has. The N comes from sequential IDs and is analogous to the IDs in xinput. For example, on my machine I have:

[~]# ls -l /dev/input/
total 0
drwxr-xr-x 2 root root     100 Jan 26 16:01 by-id
drwxr-xr-x 2 root root     140 Jan 26 16:01 by-path
crw-rw---- 1 root input 13, 64 Jan 26 16:01 event0
crw-rw---- 1 root input 13, 65 Jan 26 16:01 event1
crw-rw---- 1 root input 13, 74 Jan 26 16:01 event10
crw-rw---- 1 root input 13, 75 Jan 26 16:01 event11
crw-rw---- 1 root input 13, 76 Jan 26 16:01 event12
crw-rw---- 1 root input 13, 77 Jan 26 16:01 event13
crw-rw---- 1 root input 13, 66 Jan 26 16:01 event2
crw-rw---- 1 root input 13, 67 Jan 26 16:01 event3
crw-rw---- 1 root input 13, 68 Jan 26 16:01 event4
crw-rw---- 1 root input 13, 69 Jan 26 16:01 event5
crw-rw---- 1 root input 13, 70 Jan 26 16:01 event6
crw-rw---- 1 root input 13, 71 Jan 26 16:01 event7
crw-rw---- 1 root input 13, 72 Jan 26 16:01 event8
crw-rw---- 1 root input 13, 73 Jan 26 16:01 event9
crw-rw---- 1 root input 13, 63 Jan 26 16:01 mice
crw-rw---- 1 root input 13, 32 Jan 26 16:01 mouse0
crw-rw---- 1 root input 13, 33 Jan 26 16:01 mouse1

And xinput gives me analogous IDs:

[~]$ xinput list
⎡ Virtual core pointer              id=2  [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer    id=4  [slave  pointer  (2)]
⎜   ↳ Logitech USB Optical Mouse    id=10 [slave  pointer  (2)]
⎜   ↳ SynPS/2 Synaptics TouchPad    id=14 [slave  pointer  (2)]
⎣ Virtual core keyboard             id=3  [master keyboard (2)]
    ↳ Virtual core XTEST keyboard   id=5  [slave  keyboard (3)]
    ↳ Power Button                  id=6  [slave  keyboard (3)]
    ↳ Video Bus                     id=7  [slave  keyboard (3)]
    ↳ Power Button                  id=8  [slave  keyboard (3)]
    ↳ Sleep Button                  id=9  [slave  keyboard (3)]
    ↳ USB 2.0 Camera                id=11 [slave  keyboard (3)]
    ↳ Asus Laptop extra buttons     id=12 [slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard  id=13 [slave  keyboard (3)]

(Note that eventN corresponds to id=N.)

Without Xorg

1.1 What is the purpose of this file?

Note that all random inputs (including my USB camera!) are seen by Xorg as part of the virtual keyboard. This allows for muxing and demuxing input. For example, I can move my mouse through my USB mouse or through my trackpad and an application does not need to know the difference. (The fact that the USB camera is part of the virtual keyboard is because it has a button to turn it on and off. And since it is a button, it becomes part of the keyboard subsystem. The actual video input is handled in /sys/class/video4linux.) In other words, for an application there really is only one keyboard and only one mouse. But both Xorg and the kernel need to know the differences. And this leads to the last part:

1.2 Why not just send it directly from the kernel to the X server?
Because Xorg needs to know the difference. And there are situations in which it is useful. You can remap keys in Xorg for each slave input device differently. For example, I have a gaming set with pedals; when used in a racing game it outputs a, b and c for each of its pedals. Yet, when programming, I remap these keys to Esc, Ctrl and Alt, without remapping the keys on the keyboard itself. Also, it isn't necessary that a machine runs Xorg. For example, on a headless server I can get the following output:

[~]$ ls -l /dev/input/
total 0
drwxr-xr-x 2 root root     80 Nov  8 02:36 by-path
crw-rw---- 1 root input 13, 64 Nov  8 02:36 event0
crw-rw---- 1 root input 13, 65 Nov  8 02:36 event1
crw-rw---- 1 root input 13, 66 Nov  8 02:36 event2

where the input devices correspond to serial ports (notably, in this case they do) instead of a keyboard or mouse.
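If you want to see which physical device is behind each eventN node without going through Xorg at all, the kernel exposes the mapping itself; the exact names will differ per machine, but the output has roughly this shape:

$ cat /proc/bus/input/devices
I: Bus=0003 Vendor=046d Product=c077 Version=0111
N: Name="Logitech USB Optical Mouse"
...
H: Handlers=mouse0 event3

The H: Handlers= line ties each device to its /dev/input nodes.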
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/340430", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212657/" ] }
340,440
#!/bin/bash

INT=-5

if [[ "$INT" =~ ^-?[0-9]+$ ]]; then
    echo "INT is an integer."
else
    echo "INT is not an integer." >&2
    exit 1
fi

What does the leading ~ do in the starting regular expression?
The ~ is actually part of the operator =~, which performs a regular expression match of the string to its left to the extended regular expression on its right.

[[ "string" =~ pattern ]]

Note that the string should be quoted, and that the regular expression shouldn't be quoted. A similar operator is used in the Perl programming language. The regular expressions understood by bash are the same as those that GNU grep understands with the -E flag, i.e. the extended set of regular expressions.

Somewhat off-topic, but good to know: When matching against a regular expression containing capturing groups, the part of the string captured by each group is available in the BASH_REMATCH array. The zeroth/first entry in this array corresponds to & in the replacement pattern of sed's substitution command (or $& in Perl), which is the bit of the string that matches the pattern, while the entries at index 1 and onwards correspond to \1, \2, etc. in a sed replacement pattern (or $1, $2 etc. in Perl), i.e. the bits matched by each parenthesis.

Example:

string=$( date +%T )

if [[ "$string" =~ ^([0-9][0-9]):([0-9][0-9]):([0-9][0-9])$ ]]; then
    printf 'Got %s, %s and %s\n' \
        "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}" "${BASH_REMATCH[3]}"
fi

This may output Got 09, 19 and 14 if the current time happens to be 09:19:14.

The REMATCH bit of the BASH_REMATCH array name comes from "Regular Expression Match", i.e. "RE-Match".

In non-bash Bourne-like shells, one may also use expr for limited regular expression matching (using only basic regular expressions). A small example:

$ string="hello 123 world"
$ expr "$string" : ".*[^0-9]\([0-9][0-9]*\)"
123
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/340440", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212669/" ] }
340,459
I'm scripting the modification of a config file laid out like this:

[purple]
auth = no
enabled = 0
username =
password =
priority = 0
host = True

[shovel]
group =
manual = False
enabled = 0
username =

Where there are many [categories] and sometimes with the same key of a key/value pair. Is it possible to craft a one-liner using awk/sed/grep/tr/cut/perl that can change the value of "enabled = 0" to "enabled = 1" but ONLY for the category [shovel]?
In sed you can use a range (stopping on the empty line at the end of the [shovel] category):

sed '/\[shovel\]/,/^$/ s/enabled = 0/enabled = 1/' file

The first part, /\[shovel\]/,/^$/, means: find a line with [shovel], keep going until you find an empty line, and apply the following command(s) (in this case a simple s/old/new) only on that part of the file.

Note in response to comment: sed will not accept alternative delimiters in ranges and addresses (so escape any / characters that must be literal, if you need to match them), unless they are preceded by a backslash. You can always use an alternative delimiter in any following commands, for example:

sed '/\[shovel\]/,/^$/ s|enabled = 0|enabled = 1|' file

Or:

sed '\|\[shovel\]|, |^$| s|enabled = 0|enabled = 1|' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340459", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
340,478
I want to remove the line that contains set -x in the file "$(which tsc)", but I get an error:

$ sed -i ".bak" 'set -x' "$(which tsc)"
sed: -e expression #1, char 1: unknown command: `.'

I checked the solution here, but my line terminators are LF.
You have a syntax error. With GNU sed there can't be any space after -i, just the extension; this is the source of the (initial) error message. Also, to remove a line based on a pattern you need /<pattern>/ d with sed (there are other approaches, but this one is the cleanest). So do:

sed -i".bak" '/set -x/ d' "$(which tsc)"

Optionally, as the backup extension does not contain any whitespace or control characters, you could get away without quoting in this case:

sed -i.bak '/set -x/ d' "$(which tsc)"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340478", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90372/" ] }
340,532
Suppose that two numbers are stored in two different files, a.txt and b.txt . Each number is large enough (more than 30 digits) to not be supported by the numeric data type used by bash . How may I add them in the shell?
The trick is to not use bash to perform the addition 1. First, read each number into a separate variable. This assumes that the files contain only a number and no other information.

read a <a.txt
read b <b.txt

Then use the bc calculator to get the result:

bc <<<"$a + $b"

bc is an "arbitrary-precision arithmetic language and calculator". To store the result in a variable c:

c="$( bc <<<"$a + $b" )"

If the <<< syntax feels weird (it's called a "here-string" and is an extension to the POSIX shell syntax supported by bash and a few other shells), you may instead use printf to send the addition to bc:

printf '%s + %s\n' "$a" "$b" | bc

And storing the result in c again:

c="$( printf '%s + %s\n' "$a" "$b" | bc )"

1 Using bash to perform the addition of two extremely large numbers would require the implementation, in the bash script, of a routine for doing arbitrary-precision arithmetic. This is perfectly doable, but cumbersome and unnecessary since every Unix comes with bc, which already provides this service to you in a relatively easy and accessible way.
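A quick end-to-end demonstration with two made-up 30-digit values, just to show the flow (the numbers are arbitrary):

$ echo 123456789012345678901234567890 > a.txt
$ echo 987654321098765432109876543210 > b.txt
$ read a <a.txt; read b <b.txt
$ bc <<<"$a + $b"
1111111110111111111011111111100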
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340532", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212767/" ] }
340,642
I'm working remotely on a new CentOS 7 VM. I have screen running with several open sessions. I'm finding that if I leave one of the sessions idle for a while, then try to come back to it, it's gone. Nothing special going on in the sessions - ssh connections to other systems, mysql client, top, etc. - those all stay up. But if I just leave a session sitting at the bash shell prompt, it disappears - I just watched one and it took about 10-11 minutes. I've never had that happen before. Any idea what's going on? New information: it's probably not screen. I opened a new ssh session to this system and left it idle, and it closed on me as well. But this time I got a message:

timed out waiting for input: auto-logout

Off to google... probably some goofy shell setting?
A bash shell can be configured to exit after a certain amount of idle time. This value is defined with the TMOUT variable. For example, TMOUT=300 will cause the shell to exit after 5 minutes (300 seconds) of inactivity.
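To find where TMOUT is being set (and confirm this is the cause), checking the usual shell start-up files is a good start; paths vary a little by distro:

echo $TMOUT
grep -rn TMOUT /etc/profile /etc/profile.d/ /etc/bashrc 2>/dev/null

If it is set there (and not declared readonly), a plain unset TMOUT in your own ~/.bashrc disables the auto-logout; if it was made readonly, it has to be changed in the system-wide file itself.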
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/340642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202968/" ] }
340,649
To watch all videos with mpv in a predefined sorted order one can do

mpv /path/to/videos/*

because mpv can accept multiple files as its argument and play them one after another. I would like to pass those files to mpv but randomly ordered, so each time I start watching it comes up with something unexpected. Here is what I've tried so far:

ls /path/to/videos/* | sort -R | while read file; do mpv $file; done

This variant does not satisfy my needs, since it starts a new instance for each video and the new window always gets focused.
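mpv has this built in: the --shuffle option randomises the playlist while keeping everything in a single instance, so only one window is opened:

mpv --shuffle /path/to/videos/*

If you ever need to randomise the list outside mpv (say, to filter it first), one shell-side sketch that stays safe with odd filenames is:

find /path/to/videos -type f -print0 | shuf -z | xargs -0 mpv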
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/340649", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194249/" ] }
340,650
I'd like to run a series of tasks but stop should any of them fail, hence I've written (something like):

for task in [TASKS]; do
    process "$task" || break
    commit "$task"
done

This works fine, but (as specified) the exit status of this loop is zero even if we break early. Ideally break-ing would be able to convey the failure. I know returning 0 is the documented behavior of break, but I'm curious if there are any relatively clean workarounds. The best I can imagine is wrapping this in a function and setting a didBreak variable, and using that as the exit status (of the function). It'd work, but it feels overly complex.
Using ! break works in many shells (all but the pdksh-based ones and the sh of FreeBSD (by design) in my tests):

$ zsh -c 'for i in x; do ! break; echo "$i"; done'; echo "$?"
1
$ bash -c 'for i in x; do ! break; echo "$i"; done'; echo "$?"
1
$ ksh88 -c 'for i in x; do ! break; echo "$i"; done'; echo "$?"
1
$ ksh93 -c 'for i in x; do ! break; echo "$i"; done'; echo "$?"
1
$ dash -c 'for i in x; do ! break; echo "$i"; done'; echo "$?"
1
$ yash -c 'for i in x; do ! break; echo "$i"; done'; echo "$?"
1
$ bosh -c 'for i in x; do ! break; echo "$i"; done'; echo "$?"
1
$ pdksh -c 'for i in x; do ! break; echo "$i"; done'; echo "$?"
0
$ mksh -c 'for i in x; do ! break; echo "$i"; done'; echo "$?"
0
$ posh -c 'for i in x; do ! break; echo "$i"; done'; echo "$?"
0

Note that it doesn't trigger errexit in any of them. It was discussed on the austin-group (the body behind POSIX) mailing list last year. The discussion (which involved the maintainers of bosh, FreeBSD sh and NetBSD sh) died out before a consensus was reached, but the prevalent view seemed to be that POSIX required that behaviour, as ! is documented as negating the exit status of commands and break is a special builtin command that exits with 0 exit status. But if you apply the same reasoning to return, for instance, you'll see that fewer shells comply.

In zsh, you can use return in an anonymous function instead of break:

$ () for i in x y; do echo $i; return 1; done
x
$ echo $?
1
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19157/" ] }
340,676
In what situations would one want to use a hard-link rather than a soft-link? I personally have never run across a situation where I'd want to use a hard-link over a soft-link, and the only use-case I've come across when searching the web is deduplicating identical files .
Aside from the backup usage mentioned in another comment, which I believe also includes the snapshots on a BTRFS volume, a use-case for hard-links over soft-links is a tag-sorted collection of files. (Not necessarily the best method to create a collection, a database-driven method is potentially better, but for a simple collection that's reasonably stable, it's not too bad.) A media collection where all files are stored in one, flat, directory and are sorted into other directories based on various criteria, i.e.: year, subject, artist, genre, etc. This could be a personal movie collection, or a commercial studio's collective works. Essentially finished, the file is saved, not likely to be modified, and sorted, possibly into multiple locations by links. Bear in mind that the concept of "original" and "copy" are not applicable to hard-links: every link to the file is an original, there is no "copy" in the normal sense. For the description of the use-case, however, the terms mimic the logic of the behavior. The "original" is saved in the "catalog" directory, and the sorted "copies" are hard-linked to those files. The file attributes on the sorting directories can be set to r/o, preventing any accidental changes to the file-names and sorted structure, while the attributes on the catalog directory can be r/w allowing it to be modified as needed. (Case for that would be music files where some players attempt to rename and reorganize files based on tags embedded in the media file, from user input, or internet retrieval.) Additionally, since the attributes of the "copy" directories can be different than the "original" directory, the sorted structure could be made available to the group, or world, with restricted access while the main "catalog" is only accessible to the principal user, with full access. The files themselves, however will always have the same attributes on all links to that inode. (ACL could be explored to enhance that, but not my knowledge area.) If the original is renamed, or moved (the single "catalog" directory becomes too large to manage, for example) the hard-links remain valid, soft-links are broken. If the "copies" are moved and the soft-links are relative, then the soft-links will, again, be broken, and the hard-links will not be. Note: there seems to be inconsistency on how different tools report disk usage when soft-links are involved. With hard-links, however, it seems consistent. So with 100 files in a catalog sorted into a collection of "tags", there could easily be 500 linked "copies." (For an photograph collection, say date, photographer, and an average of 3 "subject" tags.) Dolphin, for example, would report that as 100 files for hard-links, and 600 files if soft-links are used. Interestingly, it reports that same disk-space usage either way, so it looks like a large collection of small files for soft-links, and a small collection of large files for hard-links. A caveat to this type of use-case is that in file-systems that use COW, modifying the "original" could break the hard-links, but not break the soft-links. But, if the intent is to have the master copy, after editing, saved, and sorted, COW doesn't enter the scenario.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/340676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41033/" ] }
340,718
I'm trying to bring HEREDOC text into a shell script variable in a POSIX compliant way. I tried like so:

#!/bin/sh

NEWLINE="
"

read_heredoc2() {
    while IFS="$NEWLINE" read -r read_heredoc_line; do
        echo "${read_heredoc_line}"
    done
}

read_heredoc2_result="$(read_heredoc2 <<'HEREDOC'
 _ _ _
| | | (_) _ __ ___ _ _ _ __ | | __ _ ___ ___ ___ _ __ | |_ _ __ ___
| '_ ` _ \| | | | '_ \| |/ _` |/ __/ _ \/ _ \| '_ \| | | '_ \ / _ \
| | | | | | |_| | |_) | | (_| | (_| __/ (_) | | | | | | | | | __/
|_| |_| |_|\__, | .__/|_|\__,_|\___\___|\___/|_| |_|_|_|_| |_|\___|
 __/ | | |___/|_|
HEREDOC
)"

echo "${read_heredoc2_result}"

That produced the following, which is wrong:

 _ _ _
| | | (_) _ __ ___ _ _ _ __ | | __ _ ___ ___ ___ _ __ | |_ _ __ ___
| '_ ` _ \| | | | '_ \| |/ _` |/ __/ _ \/ _ \| '_ \| | | '_ \ / _ | | | | | | |_| | |_) | | (_| | (_| __/ (_) | | | | | | | | | __/
|_| |_| |_|\__, | .__/|_|\__,_|\___\___|\___/|_| |_|_|_|_| |_|\___|
 __/ | | |___/|_|

The following works, but I don't like how clunky it is by using a random output variable:

#!/bin/sh

NEWLINE="
"

read_heredoc1() {
    read_heredoc_first=1
    read_heredoc_result=""
    while IFS="$NEWLINE" read -r read_heredoc_line; do
        if [ ${read_heredoc_first} -eq 1 ]; then
            read_heredoc_result="${read_heredoc_line}"
            read_heredoc_first=0
        else
            read_heredoc_result="${read_heredoc_result}${NEWLINE}${read_heredoc_line}"
        fi
    done
}

read_heredoc1 <<'HEREDOC'
 _ _ _
| | | (_) _ __ ___ _ _ _ __ | | __ _ ___ ___ ___ _ __ | |_ _ __ ___
| '_ ` _ \| | | | '_ \| |/ _` |/ __/ _ \/ _ \| '_ \| | | '_ \ / _ \
| | | | | | |_| | |_) | | (_| | (_| __/ (_) | | | | | | | | | __/
|_| |_| |_|\__, | .__/|_|\__,_|\___\___|\___/|_| |_|_|_|_| |_|\___|
 __/ | | |___/|_|
HEREDOC

echo "${read_heredoc_result}"

Correct output:

 _ _ _
| | | (_) _ __ ___ _ _ _ __ | | __ _ ___ ___ ___ _ __ | |_ _ __ ___
| '_ ` _ \| | | | '_ \| |/ _` |/ __/ _ \/ _ \| '_ \| | | '_ \ / _ \
| | | | | | |_| | |_) | | (_| | (_| __/ (_) | | | | | | | | | __/
|_| |_| |_|\__, | .__/|_|\__,_|\___\___|\___/|_| |_|_|_|_| |_|\___|
 __/ | | |___/|_|

Any ideas?
The problem is that, in Bash, inside $( ... ) escape (and other) sequences get parsed, even though the heredoc itself wouldn't have them. You get a doubled line because \ escapes the line break. What you're seeing is really a parsing issue in Bash - other shells don't do this. Backticks can also be a problem in older versions. I have confirmed that this is a bug in Bash, and it will be fixed in future versions.

You can at least simplify your function drastically:

func() {
    res=$(cat)
}

func <<'HEREDOC'
...
HEREDOC

If you want to choose the output variable it can be parameterised:

func() {
    eval "$1"'=$(cat)'
}

func res <<'HEREDOC'
...
HEREDOC

Or a fairly ugly one without eval:

{ res=$(cat) ; } <<'HEREDOC'
...
HEREDOC

The {} are needed, rather than (), so that the variable remains available afterwards. Depending on how often you'll do this, and to what end, you might prefer one or another of these options. The last one is the most concise for a one-off.

If you're able to use zsh, your original command substitution + heredoc will work as-is, but you can also collapse all of this down further:

x=$(<<'EOT'
...
EOT
)

Bash doesn't support this and I don't think any other shell that would experience the problem you're having does either.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/340718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212882/" ] }
340,723
From http://cdimage.ubuntu.com/xenial/daily-live/current/ I can download the latest stable build of Ubuntu iso. Where can I find the latest build of the Fedora iso?
Fedora does have their release isos online @ https://getfedora.org/en/workstation/download/ . For the nightly composes: Each day (or sometimes more than once per day) , a full 'compose' of the tree is attempted. This will usually result in the creation of all or most of the usual install, live and disk images, installer trees and so forth. The composes are synced to the /fedora/linux/development/ directory on the mirrors, and you can find the images there. The rawhide repository: In Rawhide - Fedora's rolling release repository, from which release are Branched before finally going stable - rawhide is the only repository. All package builds are sent there. It is represented for Yum or DNF in the fedora-rawhide.repo file in the repository path. For any system running Rawhide, it should be enabled. For any other system, it should not. The rawhide repositories for the various primary architectures can be found in the /fedora/linux/development/rawhide directory on the mirrors, and can also be queried from MirrorManager . For instance, https://mirrors.fedoraproject.org/mirrorlist?repo=fedora-rawhide&arch=x86_64 will return mirrors for the x86_64 fedora repository for Rawhide.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
340,740
I had to install Android Studio in /opt on Fedora 25. I don't want to run it using sudo, for various reasons. Right now I'm not setting a root password because of that. I'm not sure if I should change /opt permissions to 755 or if there's a better option. I see the following if I make my account password protected and run Android Studio without sudo:

./studio.sh
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=350m; support was removed in 8.0
Looking in classpath from com.intellij.util.lang.UrlClassLoader@6d5380c2 for /com/sun/jna/linux-x86-64/libjnidispatch.so
Found library resource at jar:file:/opt/android-studio/lib/jna.jar!/com/sun/jna/linux-x86-64/libjnidispatch.so
Trying /home/user/.AndroidStudio2.2/system/tmp/jna4343912368523517735.tmp
Found jnidispatch at /home/user/.AndroidStudio2.2/system/tmp/jna4343912368523517735.tmp
[ 47553]  WARN - dea.updater.SdkComponentSource - File /home/user/.android/repositories.cfg could not be loaded.
[ 47830]  WARN - s.RepoProgressIndicatorAdapter - File /home/user/.android/repositories.cfg could not be loaded.
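You don't need sudo (or a root password) just to run Android Studio from /opt. /opt is normally root-owned with 755 permissions, which already lets any user enter it and execute what's inside; what matters is that your user can read the unpacked files and write wherever Studio stores its state. One reasonable setup (a sketch; adjust the path to your install) is to hand the application directory to your user:

sudo chown -R "$USER": /opt/android-studio

and then launch /opt/android-studio/bin/studio.sh as your normal user. Studio writes its configuration and SDK downloads under your home directory (~/.AndroidStudio*, ~/.android, and whatever SDK path you pick), so nothing under /opt needs to stay writable once it is installed. The two WARN lines about repositories.cfg are harmless and go away if you create the file:

mkdir -p ~/.android && touch ~/.android/repositories.cfg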
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15433/" ] }
340,764
Question: Should I use fdisk when creating partitions? Or is it advisable to use parted since it uses GPT? (by default?) And with that I can create partitions larger than 2TB.
MBR, Master Boot Record

Wikipedia excerpt; link:

A master boot record (MBR) is a special type of boot sector at the very beginning of partitioned computer mass storage devices like fixed disks or removable drives intended for use with IBM PC-compatible systems and beyond. The concept of MBRs was publicly introduced in 1983 with PC DOS 2.0.

I intentionally copy-pasted this for you to see that MBR comes all the way from 1983.

GPT, GUID Partition Table

Wikipedia excerpt; link:

GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical storage device used in a desktop or server PC, such as a hard disk drive or solid-state drive, using globally unique identifiers (GUID). Although it forms a part of the Unified Extensible Firmware Interface (UEFI) standard (Unified EFI Forum proposed replacement for the PC BIOS), it is also used on some BIOS systems because of the limitations of master boot record (MBR) partition tables, which use 32 bits for storing logical block addresses (LBA) and size information on a traditionally 512 byte disk sector.

To answer your question, I advise you to use GPT partitioning where possible; in other words, if you don't have to use MBR, use GPT instead.

GPT advantages over MBR

it has a backup partition table
it has no ridiculous limit on primary partitions; it allows for up to 128 partitions without having to resort to extended partitions
it also stores cyclic redundancy check (CRC) values to check that its data is intact
as you mentioned, it supports large drives; the maximum size is 8 ZiB (2^64 sectors × 2^9 bytes per sector)

The usual tools

MBR in a CLI: fdisk (link to manual); note: fdisk from util-linux 2.30.2 partially understands GPT now
GPT in a CLI: gdisk (link to manual)
For both MBR and GPT in a CLI: parted (link to manual)
For both MBR and GPT in a GUI: gparted (link to wiki)
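For completeness, here is a minimal parted session that labels a disk GPT and carves out one big partition; /dev/sdX is a placeholder and the commands are destructive, so double-check the device name first:

# WARNING: wipes the existing partition table on /dev/sdX
sudo parted --script /dev/sdX mklabel gpt
sudo parted --script /dev/sdX mkpart data ext4 1MiB 100%
sudo parted /dev/sdX print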
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/340764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/196913/" ] }
340,834
I know pgrep, top, and ps all query the /proc filesystem. So far so good. Yet what got me thinking is that in the past there was no /proc filesystem. Even nowadays, Mac OS X, as far as I know, has no /proc filesystem, yet top still accesses process info, which suggests to me such info should be coming from the kernel directly. My question, however, is specific to Linux. Which libraries and/or syscalls can be used to query process information directly, bypassing /proc?
It is possible to query process information from the Linux kernel directly — by reading files under /proc . This is the way it's done on Linux, Solaris and several other Unix variants. Ancient Unix systems had a ps command that was setuid root and mapped some kernel memory (through /dev/kmem or similar) and parsed kernel data structures. This required ps to have privileges (dangerous) and to be tied to the exact kernel version (inconvenient). On modern *BSD systems, ps works by calling the sysctl function, which in turn makes system calls to retrieve information formatted as structures defined by a binary format. MacOS uses the same mechanism. Linux does not have this BSD interface. It uses procfs and sysfs to allow userland to retrieve information from the kernel. Where BSD marshals information in a binary format retrieved by a special-purpose system call, Linux marshals information as strings retrieved through ordinary file access to a special-purpose filesystem. It would be possible to use the same method as in ancient Unix systems, but nobody does it because it's such an inferior method (requires privileges and requires updating whenever the kernel data structures change).
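To make the Linux side of this concrete, "reading files under /proc" really is just ordinary file I/O; for example, inspecting one's own process from the shell:

$ awk '/^(Name|State|VmRSS)/' /proc/self/status
$ tr '\0' ' ' < /proc/self/cmdline; echo

The first command picks a few fields out of the kernel-generated status file; the second prints the NUL-separated command line, which is exactly the kind of string-formatted data that ps parses on Linux.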
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340834", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85039/" ] }
340,835
I have a SoC device (e.g.: a Raspberry Pi), and a laptop. I would like to create a link over direct cable Ethernet using a "cross over" cable between my machine and the Pi, so I can connect to it with SSH. The Raspberry Pi is headless and I don't have the option of plugging it in to a network, or plug in a keyboard to change configuration. So I cannot set up a manual IP on the Raspberry Pi, and I cannot get it to be given an IP by a router. The Raspberry Pi by default is looking for a DHCP server on the Ethernet port. I think what I need is to set up a temporary DHCP server on that interface, some guidance would be useful, or is there another solution I haven't thought of?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340835", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172925/" ] }
340,844
I am unable to ssh to a server that asks for a diffie-hellman-group1-sha1 key exchange method:

ssh 123.123.123.123
Unable to negotiate with 123.123.123.123 port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1

How to enable the diffie-hellman-group1-sha1 key exchange method on Debian 8.0? I have tried (as proposed here ) to add the following lines to my /etc/ssh/ssh_config

KexAlgorithms diffie-hellman-group1-sha1,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
Ciphers 3des-cbc,blowfish-cbc,aes128-cbc,aes128-ctr,aes256-ctr

regenerate keys with ssh-keygen -A and restart ssh with service ssh restart but still get the error.
The OpenSSH website has a page dedicated to legacy issues such as this one. It suggests the following approach, on the client : ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 123.123.123.123 or more permanently, adding Host 123.123.123.123 KexAlgorithms +diffie-hellman-group1-sha1 to ~/.ssh/config . This will enable the old algorithms on the client , allowing it to connect to the server.
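If in doubt about what the client actually supports before editing any config, recent OpenSSH versions (roughly 6.3 and later) can list their built-in algorithms:

# list key exchange algorithms the local ssh client was compiled with
ssh -Q kex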
{ "score": 10, "source": [ "https://unix.stackexchange.com/questions/340844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204816/" ] }
340,846
I have chrooted Debian in android marshmallow (snapdragon 650 [64bit]). I installed iceweasel in chrooted debian. But it showed this error ::

(firefox:16210): Gdk-WARNING **: shmget failed: error 38 (Function not implemented)
Segmentation fault

So, I compiled libandroid-shmem.so from this repo using android-ndk and copied it from the armv8-a folder to the /lib directory of chrooted debian. It then asked for liblog.so .

iceweasel: error while loading shared libraries: liblog.so: cannot open shared object file: No such file or directory

So I copied liblog.so from android-ndk to the chrooted debian /lib directory. Now when I run env LD_PRELOAD="/lib/libandroid-shmem.so" iceweasel , it displays this error :

iceweasel: error while loading shared libraries: /usr/lib/aarch64-linux-gnu/libc.so: invalid ELF header

Here are some details ::

$ file /lib/libandroid-shmem.so
/lib/libandroid-shmem.so: ELF 64-bit LSB shared object, ARM aarch64, version 1 (SYSV), dynamically linked, BuildID[sha1]=5ad4582c76effbe27a6688369ad979fea5dfac2a, stripped
$ cat /usr/lib/aarch64-linux-gnu/libc.so
/* GNU ld script
   Use the shared library, but some functions are only in
   the static library, so try that secondarily.  */
OUTPUT_FORMAT(elf64-littleaarch64)
GROUP ( /lib/aarch64-linux-gnu/libc.so.6 /usr/lib/aarch64-linux-gnu/libc_nonshared.a  AS_NEEDED ( /lib/aarch64-linux-gnu/ld-linux-aarch64.so.1 ) )
{ "score": 10, "source": [ "https://unix.stackexchange.com/questions/340846", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87903/" ] }
340,855
I have the following line in my bash file:

LIST=$(ssh 192.168.0.22 'ls -1 /web');

The problem I am having is that it is part of an automated script and I often get this on the stdout and not the data I need:

ssh_exchange_identification: Connection closed by remote host

I realize that LIST only gets the stdout of the ls . So I am looking for a command that would get more of the info from the commands. In particular:

- stdout for ls - I have that right now
- stderr for ls - not really interested, I don't expect a problem there
- stdout for ssh - Not interested, I don't even know what it would output
- stderr for ssh - THIS IS WHAT I AM LOOKING FOR to check whether the ssh worked correctly. This being empty should mean that I have the data in $LIST I expect
From the ssh man page on Ubuntu 16.04 (LTS):

EXIT STATUS
    ssh exits with the exit status of the remote command or with 255 if an error occurred.

Knowing that, we can check the exit status of the ssh command. If the exit status was 255, we know that it's an ssh error, and if it's any other non-zero value, that's an ls error.

#!/bin/bash
TEST=$(ssh "$USER"@localhost 'ls /proc' 2>&1)
# capture the status right away; the next [ test would overwrite $?
STATUS=$?
if [ "$STATUS" -eq 0 ]; then
    printf "%s\n" "SSH command successful"
elif [ "$STATUS" -eq 255 ]; then
    printf "%s\n%s" "SSH failed with following error:" "$TEST"
else
    printf "%s\n%s" "ls command failed" "$TEST"
fi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340855", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212982/" ] }
340,883
I would like to use bash or shell script and replace anything between the two parentheses with an empty space. The text between the two parentheses could be in multiple lines, such as:

myFunction (
line0
line1
line2
line3
line4
)

that I would like to convert to:

myFunction (
)
AWK

AWK allows executing a code-block {} on a range of conditions. In this case, we want to execute gsub() on every line in the range from the one that contains ( to the one that contains ) .

$ awk '$0~/[(]/,$0~/[)]/{gsub(/line/,"newline")};1' input.txt
another line
something else
myFunction (
newline0
newline1
newline2
newline3
newline4
)
some other line

Python (original answer)

Here's a quick python script that does the job:

#!/usr/bin/env python3
from __future__ import print_function
import sys

with open(sys.argv[1]) as fp:
    flag = None
    for line in fp:
        clean_line = line.strip()
        if "(" in clean_line:
            flag = True
        if flag:
            clean_line = clean_line.replace("line", "newline")
        print(clean_line)
        if ")" in clean_line:
            flag = False

Test run:

$ cat input.txt
another line
something else
myFunction (
line0
line1
line2
line3
line4
)
some other line
$ ./edit_function_args.py input.txt
another line
something else
myFunction (
newline0
newline1
newline2
newline3
newline4
)
some other line

BASH version

The same script, except rewritten in bash with sed

#!/bin/bash
flag=false
while IFS= read -r line
do
    if grep -q '(' <<< "$line"
    then
        flag=true
    fi
    if $flag
    then
        line=$(sed 's/line/newline/' <<< "$line")
    fi
    printf "%s\n" "$line"
    if grep -q ')' <<< "$line"
    then
        flag=false
    fi
done < "$1"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212999/" ] }
340,912
My guest OS is an x86 (.vmdk format) and it seems from QEMU documentation that since my host is an ARM Raspberry Pi 3, I can't take advantage of KVM even after enabling it in the kernel. Is that correct?
The naive way to run a virtual machine is to interpret each instruction. The VM software decodes each instruction and runs it. When the instruction set of the virtual machine is the same as the host, an alternative method is to simply execute the instructions. Only a few instructions can't be executed directly because the guest doesn't have full control over the hardware. A sticky point is memory accesses: the guest doesn't have access to the whole memory, so a translation needs to be performed on the addresses. High-end CPUs such as x86 CPUs with the VT-x (Intel) or AMD-V (AMD) extension, or ARM Cortex-A15 and above (including the Pi 2 and the Pi 3), have hardware features to perform this address translation. KVM is a component of the Linux kernel that takes advantage of these instructions to allow a code in a virtual machine to be executed directly by the native processor. This doesn't help you, because you aren't trying to execute ARM code on an ARM CPU, or x86 on an x86 CPU. You want to execute x86 code on an ARM CPU. For this, software to decode and interpret the instructions is necessary. KVM doesn't help here.
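A quick way to confirm this on the Pi itself, plus a sketch of how the emulation would be invoked instead (the disk image name is hypothetical):

# /dev/kvm exists only when hardware virtualization for the host's own
# architecture is usable; it cannot accelerate an x86 guest on ARM
ls -l /dev/kvm
# pure emulation of an x86-64 guest on ARM: it works, just without KVM
qemu-system-x86_64 -m 1024 -drive file=guest.vmdk,format=vmdk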
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/340912", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213011/" ] }
340,982
I want to use SSH to access a remote machine. However, I do not know how to find the HostName of a machine. I tried using the hostname command, but that only gives the local address of the machine, which (I think) can be same across different machines. When I try ssh name with name being the hostname returned by the hostname command, I get an error saying that the "hostname is not recognized." How do I find the complete hostname which I can use to distinguish the target machine? PS: I am fully authorized to use the target machine.
To access a remote machine using ssh , you will need to know either its public host name or its IP address. You will also need to have either a username and password for an account on the other machine or an SSH private key corresponding to an SSH public key installed on a particular account on the machine you're connecting to. If you do not know these details, you will have to ask someone who's administrating the other machine for how to best connect with ssh . The hostname command is exclusively used for setting or displaying the name of the system it's being run on. If you run it locally, it will return the name of your local machine, which may or may not be a fully qualified host name (in either case, it's the name/IP-number of the local machine).
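If you already have shell access on the target machine (e.g. at its console), a couple of standard commands reveal names and addresses you could try from the other side; whether the name actually resolves still depends on your network's DNS:

# a fully qualified name, if one is configured
hostname --fqdn
# the machine's IP addresses, which work directly as ssh targets
# (the -brief flag needs a reasonably recent iproute2)
ip -brief address show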
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213062/" ] }
340,997
See the following examples and their outputs in POSIX shells: false;echo $? or false || echo 1 : 1 false;foo="bar";echo $? or foo="bar" && echo 0 : 0 foo=$(false);echo $? or foo=$(false) || echo 1 : 1 foo=$(true);echo $? or foo=$(true) && echo 0 : 0 As mentioned by the highest-voted answer at https://stackoverflow.com/questions/6834487/what-is-the-variable-in-shell-scripting : $? is used to find the return value of the last executed command. This is probably a bit misleading in this case, so let's get the POSIX definition which is also quoted in a post from that thread: ? Expands to the decimal exit status of the most recent pipeline (see Pipelines). So it appears as if a assignment itself counts as a command (or rather a pipeline part) with a zero exit value but which applies before the right side of the assignment (e.g. the command substitution calls in my examples here). I see how this behavior makes sense from a practical standpoint but it seems somewhat unusual to me that the assignment itself would count in that order. Maybe to make more clear why it's strange to me, let's assume the assignment was a function: ASSIGNMENT( VARIABLE, VALUE ) then foo="bar" would be ASSIGNMENT( "foo", "bar" ) and foo=$(false) would be something like ASSIGNMENT( "foo", EXECUTE( "false" ) ) which would mean that EXECUTE runs first and only afterwards ASSIGNMENT is run but it's still the EXECUTE status that matters here. Am I correct in my assessment or am I misunderstanding/missing something?Are those the right reasons for me viewing this behavior as "strange"?
The exit status for assignments is strange . The most obvious way for an assignment to fail is if the target variable is marked readonly .

$ err(){ echo error ; return ${1:-1} ; }
$ PS1='$? $ '
0 $ err 42
error
42 $ A=$(err 12)
12 $ if A=$(err 9) ; then echo wrong ; else E=$? ; echo "E=$E ?=$?" ; fi
E=9 ?=0
0 $ readonly A
0 $ if A=$(err 10) ; then echo wrong ; else E=$? ; echo "E=$E ?=$?" ; fi
A: is read only
1 $

Note that neither the true nor false paths of the last if statement were taken; the assignment failing stopped execution of the entire statement. bash in POSIX mode and ksh93 and zsh will all abort a script if an assignment fails. To quote the POSIX standard on this :

A command without a command name, but one that includes a command substitution, has an exit status of the last command substitution that the shell performed.

This is exactly the part of the shell grammar involved in foo=$(err 42) which comes from a simple_command (simple_command → cmd_prefix → ASSIGNMENT_WORD). So if an assignment succeeds then the exit status is zero unless command substitution was involved, in which case the exit status is the status of the last one. If the assignment fails then the exit status is non-zero, but you may not be able to catch it.
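A related pitfall worth knowing, easy to verify in bash: combining a declaration builtin with a command substitution masks the substitution's exit status, because $? then reflects the builtin itself:

f() { local v=$(false); echo "inside f: $?"; }   # prints 0: local's status wins
g() { local v; v=$(false); echo "inside g: $?"; } # prints 1: assignment alone
f; g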
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/340997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117599/" ] }
341,060
systemctl Returns a list of the units, whether they are loaded, active, their sub and description. systemctl is-failed Returns a list of status only. What is the syntax to return the details of the failed units?
You can use systemctl list-units --state=failed to list all failed units. The parameters for systemctl are documented in the man page systemctl(1) .
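For the details of one particular failed unit, the usual follow-up commands would be something like the following (the unit name is a placeholder):

# full status, including the most recent log lines
systemctl status example.service
# the unit's complete journal, jumping to the end
journalctl -u example.service -e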
{ "score": 9, "source": [ "https://unix.stackexchange.com/questions/341060", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101992/" ] }
341,063
if the network interface is disconnected:

ping 8.8.8.8
connect: Network is unreachable

terminates nicely. kernel is sending a specific signal to the ping and thus ping is shutting itself down. But if network interface is up and I am blocking all traffic via iptables..

vi /etc/sysconfig/iptables

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A OUTPUT -j REJECT --reject-with icmp-net-unreachable
-A INPUT -j DROP
-A FORWARD -j DROP
COMMIT

but it will not make ping shut off. and stop.

ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 192.168.0.100 icmp_seq=1 Destination Net Unreachable
From 192.168.0.100 icmp_seq=1 Destination Net Unreachable
From 192.168.0.100 icmp_seq=1 Destination Net Unreachable

it simply keeps on continuing. I have tried other --reject-with flags such as:

icmp-net-unreachable
icmp-host-unreachable
icmp-port-unreachable
icmp-proto-unreachable
icmp-net-prohibited
icmp-host-prohibited
icmp-admin-prohibited

none of them can make ping quit. What I want to see is ping terminate the same way it terminates when network interface is disconnected. if this can not be done via iptables.. is there a command I can run to send ping the same signal the kernel sends .. to tell it "network interface is not connected" ? ( it would be a lie but I want it to shut itself off basically )
{ "score": 9, "source": [ "https://unix.stackexchange.com/questions/341063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
341,093
I am trying to get * .local domains to use the DNS server with vagrant-dns. In order for that to work I set up dnsmasq to run in front of it. NetworkManager is installed but is set to dns=none

resolve.conf:

nameserver 127.0.0.1 #this points to dnsmasq

Testing resolve:

$ nslookup domain.local
Server:   127.0.0.1
Address:  127.0.0.1#53

Name:     domain.local
Address:  10.222.222.22

Dig resolves the same:

$ dig domain.local

; <<>> DiG 9.10.3-P4-Debian <<>> domain.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18052
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;domain.local.        IN    A

;; ANSWER SECTION:
domain.local.    86400    IN    A    10.222.222.22

;; Query time: 1 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Jan 29 19:18:52 CST 2017
;; MSG SIZE  rcvd: 49

That is the correct address. I can ping the ip:

ping 10.222.222.22
PING 10.222.222.22 (10.222.222.22) 56(84) bytes of data.
64 bytes from 10.222.222.22: icmp_seq=1 ttl=64 time=0.185 ms

But I can't ping the address:

$ ping domain.local
ping: domain.local: Name or service not known

I also tried from a browser to load the page hosted there, but I get a DNS error. The strange thing is that all other sites seem to work fine, although I can't tell if it's using the localhost DNS server or not. Using debian 8 Jessie/testing
I found the answer!

So most of you will know that the /etc/hosts file will resolve domains, somewhat like a DNS server. But how does the system know to look in that file? And how does it know in what order to check that file or a DNS server? There is a file:

/etc/nsswitch.conf

I had the line:

hosts: files myhostname mdns4_minimal [NOTFOUND=return] dns

This means first check files, like /etc/hosts. Then check the system hostname. Then there is mdns4, which I believe is the protocol for finding other machines on the local network. After mdns4 is what was holding me up: [NOTFOUND=return] . mdns looks for names ending in .local . If it can't find one, it doesn't just pass to the next and final search method dns , it will actually stop and tell your system that the domain does not exist. Since the domain I set up in dnsmasq was a .local domain, it would never get there.

So there are two ways to fix this. The first is to remove [NOTFOUND=return] . This is the way I chose, and it works great. There is a small delay because I think mdns sees the .local and attempts to look it up anyway before passing it to dns . This is what my file looks like now:

hosts: files myhostname mdns4_minimal dns

Another option, since I don't really use mdns, is I could either remove it completely, or there was a way to tell it to use a different tld like .alocal instead - but I think that would effectively disable it also.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/341093", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212467/" ] }
341,159
I have a chromium running on a raspberry pi showing some monitor stats. Sometimes, the startup fails opening the relevant tabs and hence I want to restart chromium. Yet running: killall chromium-browser only kills the tabs , showing me the "oh snap" information. I want the entire chromium to shutdown. Running the same command again yields in: killall chromium-browser chromium-browser: no process found even though I see the oh snap tab.
To exit Chromium gracefully use the following command: pkill -o chromium If you do not specify the "-o" ("--oldest") flag you might see the "Chromium didn't shut down correctly" pop-up next time you start Chromium. Because signal will be received by all chromium processes instead of just the main one. Based on this answer .
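To double-check which process the signal would go to before sending anything, pgrep accepts the same matching options:

# list matching processes with their command lines; -o restricts to the oldest
pgrep -a chromium
pgrep -ao chromium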
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/341159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12471/" ] }
341,174
> man pvremove

PVREMOVE(8)                System Manager's Manual                PVREMOVE(8)

NAME
    pvremove — remove a physical volume

SYNOPSIS
    pvremove [--commandprofile ProfileName] [-d|--debug] [-h|--help] [-t|--test] [-v|--verbose] [--version] [-f[f]|--force [--force]] [--reportformat {basic|json}] [-y|--yes] PhysicalVolume [PhysicalVolume...]

DESCRIPTION
    pvremove wipes the label on a device so that LVM will no longer recognise it as a physical volume.

OPTIONS
    See lvm(8) for common options.

    -ff, --force --force
        Force the removal of a physical volume belonging to an existing volume group. Normally vgreduce(8) should be used instead of this command. You cannot remove a physical volume which is in use by some active logical volume.

    -y, --yes
        Answer yes to all questions.

SEE ALSO
    lvm(8), pvcreate(8), pvdisplay(8), vgreduce(8)

Sistina Software UK    LVM TOOLS 2.02.166(2)-RHEL7 (2016-09-28)    PVREMOVE(8)

Q: Why the two "f"s?
It's a safety switch, kind of like the --please-destroy-my-drive option in hdparm . By default the program will refuse to do such a thing (as it will likely result in something broken) but it has an option to override, for people who really really really know what they are doing (at least, in their imagination). Explanation as provided by the program itself (in addition to the manpage you already quoted):

# pvremove /dev/loop0
  PV /dev/loop0 is used by VG foobar so please use vgreduce first.
  (If you are certain you need pvremove, then confirm by using --force twice.)
# pvremove --force /dev/loop0
  PV /dev/loop0 is used by VG foobar so please use vgreduce first.
  (If you are certain you need pvremove, then confirm by using --force twice.)
# pvremove --force --force /dev/loop0
  WARNING: PV /dev/loop0 is used by VG foobar
Really WIPE LABELS from physical volume "/dev/loop0" of volume group "foobar" [y/n]? y
  WARNING: Wiping physical volume label from /dev/loop0 of volume group "foobar"
  Labels on physical volume "/dev/loop0" successfully wiped.

It really doesn't want to do it and even asks for confirmation after using -ff (if ran in interactive mode). As to why --force twice, wouldn't once be enough? LVM uses --force in other places for slightly less critical actions, so it's probably to catch people who are already in the habit of using a single --force with other LVM commands.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/341174", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/196913/" ] }
341,179
I was wondering if there is a feature in linux like OSX "shake to locate cursor", which temporarily makes the user's mouse or trackpad cursor much larger when shaken back and forth, making it easier to locate if the user loses track of it.
In Linux Mint (18.1) you can go to Preferences > Mouse and, under Locate Pointer you can check a box that will tell the system to "Show position of pointer when the Control key is pressed". I'm not sure if something similar is available on other distros. Not quite what you asked for. Possibly useful?
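On GNOME-based desktops the same setting is also exposed via gsettings, so something like this may work from a terminal; the schema below is the GNOME one, and the exact path can differ between desktops, so treat it as a starting point:

gsettings set org.gnome.desktop.interface locate-pointer true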
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213220/" ] }
341,187
I have installed a Red Hat 6.8 machine, on which I have installed a certificate on the default keystore 'cacerts' successfully. When trying to invoke a software which is using SSL and is trying to access the keystore 'cacerts' (invoked as applicative user - not root), I receive the following error message: 'java.io.FileNotFoundException: Permission denied'. From my research online, any user should have access to the 'cacerts' keystore (although the owner of the file is 'root').
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341187", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213219/" ] }
341,196
I have written a script to search an .ini file for some specific word. The .ini file has one of two different names, (let's say config.ini or configuration.ini ), so I have to check in both. I am doing it with the following if sentences by using logical OR :

HAS_SOME_WORD=FALSE
if [ "$(grep -v '^;\|^\[' "path_to_file/config.ini" | \
       grep -c '\\some_word')" -ge 1 ] \
   || [ "$(grep -v '^;\|^\[' "path_to_file/configuration.ini" | \
       grep -c '\\some_word')" -ge 1 ]; then
    HAS_SOME_WORD=TRUE
else
    HAS_SOME_WORD=FALSE
fi

I am avoiding the lines starting by ";" or "[" as they must not be included in the desired search, while looking for the word "\some_word" . What I want is to exclude the grep error messages when one of the two files does not exist, i.e:

grep: path_to_file/config.ini: No such file or directory

or:

grep: path_to_file/configuration.ini: No such file or directory

I have been able to avoid them by redirecting the output to /dev/null when executing the script:

./search_script.sh 2>/dev/null

However I would like to include this redirection in the if code itself, not when invoking the script. How shall I implement that? Is there a more efficient way of doing what I'm trying? I have tried to add the -q parameter to grep in order to avoid the error messages printed, but it had no effect. Also tried adding 2>/dev/null redirection at the end of each if sentence, but I'm afraid that I haven't applied the correct syntax.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341196", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206914/" ] }
341,204
I've installed Arch on a partition, and installed grup as the wiki says grub-install /dev/sdagrub-mkconfig -o /boot/grub/grub.cfg Now I only get Archlinux and Advanced options and no Windows. Here's my parted -l output: Model: ATA Hitachi HTS54757 (scsi)Disk /dev/sda: 750GBSector size (logical/physical): 512B/4096BPartition Table: gptDisk Flags: Number Start End Size File system Name Flags 1 1049kB 135MB 134MB bios_grub 2 135MB 269MB 134MB Mi msftres 3 269MB 86.2GB 85.9GB ntfs Ba msftdata 4 86.2GB 129GB 43.2GB ext4 misc 6 129GB 236GB 107GB ntfs msftdata 7 236GB 343GB 107GB ntfs msftdata 5 343GB 394GB 50.8GB ext4 8 394GB 403GB 9000MB linux-swap(v1) 9 403GB 507GB 103GB ext4 And lsblk -f : NAME FSTYPE LABEL UUID MOUNTPOINTsda |-sda1 |-sda2 |-sda3 ntfs Windows 02F00D3CF00D3785 /media/Windows|-sda4 ext4 25bc874b-1a89-4ff9-a01e-ca39e28155d9 |-sda5 ext4 342ebed5-9592-4246-bdc2-4cd5c5ee92d5 /|-sda6 ntfs Programming 01CE50F6C84EAFE0 /media/Programming|-sda7 ntfs Entertainment 01CE50F6CC660CE0 /media/Entertainment|-sda8 swap 374052bf-9a06-4c34-a1dc-616967b6fe4f [SWAP]`-sda9 ext4 misc2 15b7261e-39a6-4668-9f22-a7c3096a6af5 /media/misc2sr0 My /boot/grub/grub.cfg content: ## DO NOT EDIT THIS FILE## It is automatically generated by grub-mkconfig using templates# from /etc/grub.d and settings from /etc/default/grub#### BEGIN /etc/grub.d/00_header ###insmod part_gptinsmod part_msdosif [ -s $prefix/grubenv ]; then load_envfiif [ "${next_entry}" ] ; then set default="${next_entry}" set next_entry= save_env next_entry set boot_once=trueelse set default="0"fiif [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id"else menuentry_id_option=""fiexport menuentry_id_optionif [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=truefifunction savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi}function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi}if [ x$feature_default_font_path = xy ] ; then font=unicodeelseinsmod part_gptinsmod ext2set root='hd0,gpt5'if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt5 --hint-efi=hd0,gpt5 --hint-baremetal=ahci0,gpt5 342ebed5-9592-4246-bdc2-4cd5c5ee92d5else search --no-floppy --fs-uuid --set=root 342ebed5-9592-4246-bdc2-4cd5c5ee92d5fi font="/usr/share/grub/unicode.pf2"fiif loadfont $font ; then set gfxmode=auto load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_US insmod gettextfiterminal_input consoleterminal_output gfxtermif [ x$feature_timeout_style = xy ] ; then set timeout_style=menu set timeout=5# Fallback normal timeout code in case the timeout_style feature is# unavailable.else set timeout=5fi### END /etc/grub.d/00_header ###### BEGIN /etc/grub.d/10_linux ###menuentry 'Arch Linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-342ebed5-9592-4246-bdc2-4cd5c5ee92d5' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt5' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt5 --hint-efi=hd0,gpt5 --hint-baremetal=ahci0,gpt5 342ebed5-9592-4246-bdc2-4cd5c5ee92d5 else search --no-floppy --fs-uuid --set=root 342ebed5-9592-4246-bdc2-4cd5c5ee92d5 fi echo 'Loading Linux linux 
...' linux /boot/vmlinuz-linux root=UUID=342ebed5-9592-4246-bdc2-4cd5c5ee92d5 rw quiet echo 'Loading initial ramdisk ...' initrd /boot/intel-ucode.img /boot/initramfs-linux.img}submenu 'Advanced options for Arch Linux' $menuentry_id_option 'gnulinux-advanced-342ebed5-9592-4246-bdc2-4cd5c5ee92d5' { menuentry 'Arch Linux, with Linux linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-advanced-342ebed5-9592-4246-bdc2-4cd5c5ee92d5' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt5' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt5 --hint-efi=hd0,gpt5 --hint-baremetal=ahci0,gpt5 342ebed5-9592-4246-bdc2-4cd5c5ee92d5 else search --no-floppy --fs-uuid --set=root 342ebed5-9592-4246-bdc2-4cd5c5ee92d5 fi echo 'Loading Linux linux ...' linux /boot/vmlinuz-linux root=UUID=342ebed5-9592-4246-bdc2-4cd5c5ee92d5 rw quiet echo 'Loading initial ramdisk ...' initrd /boot/intel-ucode.img /boot/initramfs-linux.img } menuentry 'Arch Linux, with Linux linux (fallback initramfs)' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-fallback-342ebed5-9592-4246-bdc2-4cd5c5ee92d5' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt5' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt5 --hint-efi=hd0,gpt5 --hint-baremetal=ahci0,gpt5 342ebed5-9592-4246-bdc2-4cd5c5ee92d5 else search --no-floppy --fs-uuid --set=root 342ebed5-9592-4246-bdc2-4cd5c5ee92d5 fi echo 'Loading Linux linux ...' linux /boot/vmlinuz-linux root=UUID=342ebed5-9592-4246-bdc2-4cd5c5ee92d5 rw quiet echo 'Loading initial ramdisk ...' initrd /boot/intel-ucode.img /boot/initramfs-linux-fallback.img }}### END /etc/grub.d/10_linux ###### BEGIN /etc/grub.d/20_linux_xen ###### END /etc/grub.d/20_linux_xen ###### BEGIN /etc/grub.d/30_os-prober ###### END /etc/grub.d/30_os-prober ###### BEGIN /etc/grub.d/40_custom #### This file provides an easy way to add custom menu entries. Simply type the# menu entries you want to add after this comment. 
Be careful not to change# the 'exec tail' line above.### END /etc/grub.d/40_custom ###### BEGIN /etc/grub.d/41_custom ###if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfgelif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg;fi### END /etc/grub.d/41_custom ### Content of /etc/default/grub : GRUB_DEFAULT=0GRUB_TIMEOUT=5GRUB_DISTRIBUTOR="Arch"GRUB_CMDLINE_LINUX_DEFAULT="quiet"GRUB_CMDLINE_LINUX=""# Preload both GPT and MBR modules so that they are not missedGRUB_PRELOAD_MODULES="part_gpt part_msdos"# Uncomment to enable Hidden Menu, and optionally hide the timeout count#GRUB_HIDDEN_TIMEOUT=5#GRUB_HIDDEN_TIMEOUT_QUIET=true# Uncomment to use basic consoleGRUB_TERMINAL_INPUT=console# Uncomment to disable graphical terminal#GRUB_TERMINAL_OUTPUT=console# The resolution used on graphical terminal# note that you can use only modes which your graphic card supports via VBE# you can see them in real GRUB with the command `vbeinfo'GRUB_GFXMODE=auto# Uncomment to allow the kernel use the same resolution used by grubGRUB_GFXPAYLOAD_LINUX=keep# Uncomment if you want GRUB to pass to the Linux kernel the old parameter # format "root=/dev/xxx" instead of "root=/dev/disk/by-uuid/xxx" #GRUB_DISABLE_LINUX_UUID=true# Uncomment to disable generation of recovery mode menu entriesGRUB_DISABLE_RECOVERY=true# Uncomment and set to the desired menu colors. Used by normal and wallpaper # modes only. Entries specified as foreground/background.#GRUB_COLOR_NORMAL="light-blue/black"#GRUB_COLOR_HIGHLIGHT="light-cyan/blue"# Uncomment one of them for the gfx desired, a image background or a gfxtheme#GRUB_BACKGROUND="/path/to/wallpaper"#GRUB_THEME="/path/to/gfxtheme"# Uncomment to get a beep at GRUB start#GRUB_INIT_TUNE="480 440 1"#GRUB_SAVEDEFAULT="true"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341204", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11133/" ] }
341,243
I'm trying to have two layers of indirection, though let me know if I'm a victim of the XY problem

Boiled Down Explanation of What I'd Like to Achieve

> test1=\$test2
> test2="string to print"
> echo $test2
string to print
> echo $test1
$test2

This all makes sense, but what I want is to perform a command using $test1 and print out string to print . My gut reaction was that this should work

> echo $(echo $test1)
$test2

Bollocks. Does anyone else know if this is possible?

More Detailed Explanation of Why I Wish to Accomplish This

I want to create a text file template containing $variables which can be re-used to generate many text files. My thinking is: set environment variables, process the template file and output to another file. Example:

> #set vars
> cat template.txt | magic_var_expansion_cmd > out1.txt
> #set vars
> cat template.txt | magic_var_expansion_cmd > out2.txt
> #set vars
> cat template.txt | magic_var_expansion_cmd > out3.txt

This could obviously be used in a script, and in more sophisticated ways but this is the MVP in my mind's eye.
$ test1="hello"
$ test2="test1"
$ echo "${!test2}"
hello

From the bash manual:

If the first character of parameter is an exclamation point ( ! ), and parameter is not a nameref, it introduces a level of variable indirection. Bash uses the value of the variable formed from the rest of parameter as the name of the variable; this variable is then expanded and that value is used in the rest of the substitution, rather than the value of parameter itself. This is known as indirect expansion. If parameter is a nameref, this expands to the name of the variable referenced by parameter instead of performing the complete indirect expansion. The exceptions to this are the expansions of ${!prefix*} and ${!name[@]} described below. The exclamation point must immediately follow the left brace in order to introduce indirection.

For the second part of the question, I would probably try to avoid using actual shell variables in the template, and instead use easy to parse placeholders and replace these using a tool like sed . There are a few similar questions around, including " How to replace placeholder strings in document with contents from a file ", " Tool to create text files from a template " and " How to replace a list of placeholders in a text file? " (and there are others too).
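If it helps, gettext's envsubst utility (shipped on many systems as part of the gettext package) behaves very much like the "magic_var_expansion_cmd" from the second part of the question, assuming the template's placeholders are plain $VARIABLES that you export first:

export GREETING="hello" NAME="world"
# expand $GREETING and $NAME from the environment into the output file
envsubst < template.txt > out1.txt
# quick demo without a file:
echo 'say $GREETING, $NAME' | envsubst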
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/132881/" ] }
341,253
I want to automatically partition all of my workstations in the same way: First partition is a bootable 1GB ext4 /boot partition Second partition is a 2GB swap partition Third partition is an ext4 / partition that takes up whatever is left All partitions should be formatted I think adding this to my preseed.cfg will accomplish what I want: d-i partman-auto/workstation_recipe string \ root :: \ 1024 1023 1024 ext4 \ $primary{ } $bootable{ } \ method{ format } format{ } \ use_filesystem{ } filesystem{ ext4 } \ mountpoint{ /boot } \ . \ 2048 2047 2048 linux-swap \ $primary{ } \ method{ swap } format{ } \ . \ 17408 100000000000 -1 ext4 \ $primary{ } \ method{ format } format{ } \ use_filesystem{ } filesystem{ ext4 } \ mountpoint{ / } \ . This is based on this blog . Will this do what I want, and is there anything else I need to add to my preseed.cfg to make it accept these instructions without user intervention? I have never used partman recipes before.
I figured this out after spending days scouring the internet for any shred of information about partman - it is not very well-documented at all. Here's the config I used:

# This automatically creates a standard unencrypted partitioning scheme.
d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string regular
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-auto/choose_recipe select unencrypted-install
d-i partman-auto/expert_recipe string \
    unencrypted-install :: \
        1024 1024 1024 ext4 \
            $primary{ } $bootable{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /boot } \
        . \
        2048 2048 2048 linux-swap \
            $primary{ } \
            method{ swap } format{ } \
        . \
        17408 100000000000 -1 ext4 \
            $primary{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ / } \
        .
d-i partman-md/confirm boolean true
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true

Just drop that in your preseed and you should be good to go. Line by line:

- Use disk /dev/sda
- Do a regular install (not encrypted or LVM)
- Remove any existing LVM without prompting
- Remove any existing RAID setup without prompting
- Confirm that this is what you want
- Confirm again
- Select the "unencrypted-install" recipe, which is specified below
- This is a single logical line that specifies the entire recipe, one partition at a time. It creates the partition table exactly as I specified in the question.
- Confirm again
- Allow partman to write new labels
- Finish the process
- Confirm again
- Confirm again

And there you go, works perfect.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341253", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205533/" ] }
341,258
I export LC_ALL="en_US.UTF-8" (via sendEnv in ssh_config) when using ssh to remote systems. When I su - user123 this variable is reset by the login shell. Is there a way to preserve this variable (as well as other LC_xxx variables) when executing a login shell as another user on the remote system? I realize I could export the variable by hand after executing the shell, or an entry in ~/.bashrc of the target user, however I'd rather try to preserve the original values as sent by ssh if possible. Thanks. EDIT : I do need specific parts of the user's environment initialized which is why su - is used. I would only want to preserve LC_xxx
I found that the su implementations from util-linux and shadow-utils (and the one from GNU coreutils , though su was eventually removed from coreutils) have an option for preserving the environment:

-m, -p, --preserve-environment
    Preserve the current environment, except for:...

Other systems may have -m (like on BSDs) or -p or both or none. Busybox su has both -m and -p . Toybox su (Android) has -p .

This way, the target user's shell init files are executed, just as they would be for a login shell, but any LC_xxx variables can be tested and not initialized if they contain a valid value already.

EDIT: Just a note, I was able to apply this system-wide by adding a script in /etc/profile.d/ssh_lc_vars.sh which worked with the exported LC_xxx variables. I also had to do some extra work with the un-initialized environment variables which do not get handled with su -ml userxxx . Below is more of an example as I am not able to include the entire script. If someone can improve on it, all the better.

...
# clean up client-side variable for junk
lc_sanitize()
{
    arg="$1"
    # first, strip underscores
    clean="${arg//_/}"
    # next, replace spaces with underscores
    clean="${clean// /_}"
    # now, clean out anything that's not alphanumeric, underscore, hyphen or dot
    ret="${clean//[^a-zA-Z0-9_\.-]/}"
    # return sanitized value to caller
    echo "$ret"
}

# LC_MY_LANG comes from an ssh client environment. If empty,
# this isn't a remote ssh user, but set it locally so this user
# can connect elsewhere where this script runs
if [ -z "$LC_MY_LANG" ]; then
    # force an LC_xxx setting for the environment
    LC_MY_LANG="en-US.utf-8"
else
    # otherwise, use the LC_xxxx variable from the ssh environment
    # 2017-01-30 - when using "su --preserve-environment userxxx --login" be sure to fixup needed variables
    # shorthand: su -ml user111
    export USER=`whoami`
    export LOGNAME=${USER}
    export HOME=$( getent passwd "$USER" | cut -d: -f6 )
    cd ${HOME}
    # sanitize variable which was set client-side and log it
    u_sanitized=$(lc_sanitize "$LC_MY_LANG")
    echo "Notice: LC_MY_LANG sanitized to $u_sanitized from $SSH_CLIENT as user $USER" | logger -p auth.info
fi

# mark variable read-only so user cannot change it then export it
readonly LC_MY_LANG

# set terminal to LC_MY_LANG
export LC_LANG=${LC_MY_LANG}
export LC_MY_LANG
...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341258", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109955/" ] }
341,388
What is the syntax to get the current month name (e.g. jan or feb) in a bash script?
You can use the date(1) command. For example: date +%b
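Since the question asked for lower-case names like "jan" or "feb", a small pipe handles that (date itself prints the abbreviation capitalized in most locales):

month=$(date +%b | tr '[:upper:]' '[:lower:]')
echo "$month"    # e.g. "jan"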
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/341388", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213363/" ] }
341,400
my centOS version is centos-release-6-6.el6.centos.12.2.x86_64

I have executed the following commands to extract and install glibc-2.14:

tar zxvf glibc-2.14.tar.gz
cd glibc-2.14
mkdir build
cd build
../configure --prefix=/opt/glibc-2.14
make -j4
make install

But when I check the glibc version with the command yum list glibc , it shows:

Installed Packages
glibc.i686    2.12-1.192.el6    @base
glibc.x86_64  2.12-1.192.el6    @base
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/341400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213365/" ] }
341,406
On Arch Linux running GNOME, if I press Shift + 3 ,it locks X (nothing but the mouse cursor works). All window updates are suspended. The only option is to zap itwith Ctrl + Alt + Backspace . I've looked in logs, nothing.I've searched the web, nothing.I've tried every possible keypress, nothing. Shift + 2 works just fine,as does Shift + 4 . I'm on a Mac Pro, with a UK Apple keyboard. I don't think this should matter, but it is the £ (pound) symbol that comes out on the console before I run startx . In X, I can use Alt + Shift + 3 and get a pound without any problems. Alt + 3 gives me # as expected. Any ideas where to start with this? Is there extra logging I can somehow enable? xmodmap -pke gives: keycode 12 = 3 sterling 3 sterling numbersign sterling threesuperior sterling 3 sterling threesuperior sterling xev output. I pushed x then Shift + 3 , then 1 . Interestingly, it continued writing to the output after the DM froze. KeyPress event, serial 36, synthetic NO, window 0xa00001,root 0x4a3, subw 0x0, time 338011, (655,-7), root:(840,525),state 0x10, keycode 53 (keysym 0x78, x), same_screen YES,XLookupString gives 1 bytes: (78) "x"XmbLookupString gives 1 bytes: (78) "x"XFilterEvent returns: FalseKeyRelease event, serial 36, synthetic NO, window 0xa00001,root 0x4a3, subw 0x0, time 338091, (655,-7), root:(840,525),state 0x10, keycode 53 (keysym 0x78, x), same_screen YES,XLookupString gives 1 bytes: (78) "x"XFilterEvent returns: FalseKeyPress event, serial 36, synthetic NO, window 0xa00001,root 0x4a3, subw 0x0, time 339867, (655,-7), root:(840,525),state 0x10, keycode 62 (keysym 0xffe2, Shift_R), same_screen YES,XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: FalseKeyPress event, serial 36, synthetic NO, window 0xa00001,root 0x4a3, subw 0x0, time 340219, (655,-7), root:(840,525),state 0x11, keycode 12 (keysym 0xa3, sterling), same_screen YES,XLookupString gives 2 bytes: (c2 a3) "£"XmbLookupString gives 2 bytes: (c2 a3) "£"XFilterEvent returns: FalseKeyRelease event, serial 36, synthetic NO, window 0xa00001,root 0x4a3, subw 0x0, time 340299, (655,-7), root:(840,525),state 0x11, keycode 12 (keysym 0xa3, sterling), same_screen YES,XLookupString gives 2 bytes: (c2 a3) "£"XFilterEvent returns: FalseKeyRelease event, serial 36, synthetic NO, window 0xa00001,root 0x4a3, subw 0x0, time 340411, (655,-7), root:(840,525),state 0x11, keycode 62 (keysym 0xffe2, Shift_R), same_screen YES,XLookupString gives 0 bytes: XFilterEvent returns: FalseKeyPress event, serial 36, synthetic NO, window 0xa00001,root 0x4a3, subw 0x0, time 349763, (655,-7), root:(840,525),state 0x10, keycode 10 (keysym 0x31, 1), same_screen YES,XLookupString gives 1 bytes: (31) "1"XmbLookupString gives 1 bytes: (31) "1"XFilterEvent returns: FalseKeyRelease event, serial 36, synthetic NO, window 0xa00001,root 0x4a3, subw 0x0, time 349835, (655,-7), root:(840,525),state 0x10, keycode 10 (keysym 0x31, 1), same_screen YES,XLookupString gives 1 bytes: (31) "1"XFilterEvent returns: False
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/341406", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32903/" ] }
341,413
I am using rsync to backup files from mac to external hard drive. For example: rsync -av --delete ~/Pictures/ "/Volumes/My Passport/Mac Backups/Pictures" Before I do it, I run the same command in dry-run, by adding -n option: rsync -avn --delete ~/Pictures/ "/Volumes/My Passport/Mac Backups/Pictures" As I understand, I should see only the difference what will be copied/changed, but for some reason all folders and files are getting printed out. Though if I test the same command locally with 2 folders, I can see only the difference what has changed. Why it is so and how can I fix it? Updated: After adding -i option (thanks to Learner answer) I was able to identify why all the files are getting listed. It seems that permissions are not being copied. All folders (and files) have this: .d...p... my folder/ I added -p (and -o , -g ) option that should copy the permissions, but still no luck. Any ideas?
As Thomas pointed out the problem is to do with the Format of the hard drives (proposed solution --chmod didn't work for me). My external hdd is ExFAT and Mac's hdd is Mac OS Extended . So I did some googling and found the solution here >> a: archive, replaces the rlptgoD switches (recurse dirs, preserve symlinks, preserve permissions, preserves modification times, preserve groups, preserve owner and preserve Device files). The problem is that the Linux exFAT does not cope well with the switches that relate to permissions (the pgo), so the solution is to run rsync with the following switches, removing the p, g and o: So the answer to my question is: rsync -rltDvn --delete ~/Pictures/ "/Volumes/My Passport/Mac Backups/Pictures" Now I can see only changed files. Thanks everyone for the help.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213386/" ] }
341,429
I have many text files in a directory, and I want to remove the last line of every file in the directory. How can I do it?
You can use this nice oneliner if you have GNU sed . sed -i '$ d' ./* It will remove the last line of each non-hidden file in the current directory. Switch -i for GNU sed means to operate in place and '$ d' commands the sed to delete the last line ( $ meaning last and d meaning delete).
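A slightly more cautious variant, in case the directory contains files you'd rather not touch: restrict the glob and keep backups (GNU sed's -i accepts a suffix):

# only .txt files, and keep a .bak copy of each original
sed -i.bak '$ d' ./*.txt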
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341429", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211253/" ] }
341,431
I have a script in a gnome-terminal shell and I would like to open a new terminal, load the bashrc configuration, execute a new script and avoid the closure of the new terminal window. I have tried to execute this commands: gnome-terminal -x bash the script above open a new shell and loads bashrc, but I don't know how to execute a script automatically. gnome-terminal -x ./new_script.sh the script above open a new shell and execute the script but doesn't load bashrc and close the window. The result that I would like to obtain is to feel like opening a new terminal as clicking the term icon but execute a script after the bashrc setup.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341431", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206447/" ] }
341,454
I can't find a list of keyboard shortcuts for Evince. I checked Gnome wiki and askubuntu. Does anybody happen to know the shortcut for presentation mode or can point me to a list of shortcuts?
F5 for presentation. F11 for fullscreen. The shortcuts are next to the items in the GUI menu for my Evince version (3.18.2).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/341454", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142329/" ] }
341,458
I have a huge file (420 GB) compressed with gzip and I want to decompress it, but my HDD doesn't have space for storing the whole compressed file and its contents. Would there be a way of decompressing it 'while deleting it'? In case it helps, gzip -l says that there is only a file inside (which is a tar file which I will also have to separate somehow) Thanks in advance!
Would there be a way of decompressing it 'while deleting it'?

This is what you asked for. But it may not be what you really want. Use at your own risk.

If the 420GB file is stored on a filesystem with sparse file and punch hole support (e.g. ext4 , xfs , but not ntfs ), it would be possible to read a file and free the read blocks using fallocate --punch-hole . However, if the process is cancelled for any reason, there may be no way to recover since all that's left is a half-deleted, half-uncompressed file. Don't attempt it without making another copy of the source file first.

Very rough proof of concept:

# dd if=/dev/urandom bs=1M count=6000 | pigz --fast > urandom.img.gz
6000+0 records in
6000+0 records out
6291456000 bytes (6.3 GB, 5.9 GiB) copied, 52.2806 s, 120 MB/s
# df -h urandom.img.gz
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           7.9G  6.0G  2.0G  76% /dev/shm

urandom.img.gz file occupies 76% of available space, so it can't be uncompressed directly. Pipe uncompressed result to md5sum so we can verify later:

# gunzip < urandom.img.gz | md5sum
bc5ed6284fd2d2161296363edaea5a6d  -

Uncompress while hole punching: (this is very rough without any error checking whatsoever)

total=$(stat --format='%s' urandom.img.gz) # bytes
total=$((1+$total/1024/1024)) # MiB

for ((offset=0; offset < $total; offset++))
do
    # read block
    dd bs=1M skip=$offset count=1 if=urandom.img.gz 2> /dev/null
    # delete (punch-hole) blocks we read
    fallocate --punch-hole --offset="$offset"MiB --length=1MiB urandom.img.gz
done | gunzip > urandom.img

Result:

# ls -alh *
-rw-r--r-- 1 root root 5.9G Jan 31 15:14 urandom.img
-rw-r--r-- 1 root root 5.9G Jan 31 15:14 urandom.img.gz
# du -hcs *
5.9G    urandom.img
0       urandom.img.gz
5.9G    total
# md5sum urandom.img
bc5ed6284fd2d2161296363edaea5a6d  urandom.img

The checksum matches, the size of the source file reduced from 6GB to 0 while it was uncompressed in place.

But there are so many things that can go wrong... better don't do it at all or if you really have to, at least use a program that does saner error checking. The loop above does not guarantee at all that the data was read and processed before it gets deleted. If dd or gunzip returns an error for any reason, fallocate still happily tosses it... so if you must use this approach better write a saner read-and-eat program.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341458", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213416/" ] }
341,485
I am trying to dual boot two Linux distributions. My machine is running MintOs and I am trying to install kali Linux. I need to know whether I have to create a new swap area or can I use the old one.
Only one OS should be running at any given time so sharing the same swap partition between the two OS should be fine.
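One caveat worth adding: hibernation. If either distribution hibernates (suspend-to-disk), its memory image lives in that swap partition, and booting the other OS will reuse and corrupt it, so only share swap if you don't hibernate. Referencing the partition by UUID in each OS's /etc/fstab keeps both entries stable; the UUID below is a placeholder:

# find the swap partition's UUID
blkid | grep swap
# then in /etc/fstab of both systems:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  swap  sw  0  0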
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341485", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180127/" ] }
341,513
I wanted to try using TOR on my new Linux Mint 18.1 installation. So I apt-get install ed torbrowser-launcher and tor , then ran torbrowser-launcher . It opened a dialog box and showed me it was downloading the TOR browser; but when it was done, it said it had failed the signature check and that I may be "under attack" (oh my!). Now, it's quite unlikely I'm under some attack personally (I'm not important enough for that), so I'm guessing either it's some technical glitch, or, what would be possible although far far less likely, a man-in-the-middle attack covering my ISP rather than myself individually, nefarious government surveillance or what-not. How can I tell? What should I do? By the way, the URLs downloaded are: https://dist.torproject.org/torbrowser/6.5/tor-browser-linux64-6.5_en-US.tar.xz.asc https://dist.torproject.org/torbrowser/6.5/tor-browser-linux64-6.5_en-US.tar.xz
It's not an attack, just an outdated key. There's a issue report on this matter over at the GitHub repository . A workaround reported there, which works for some systems if not all, is to run: gpg --homedir "$HOME/.local/share/torbrowser/gnupg_homedir/" --refresh-keys --keyserver pgp.mit.edu before torbrowser-launcher . Then it works. It's quite possible that what Kusalananda suggested would also work, but I can't check that unless I undo the key update.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/341513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34868/" ] }
341,524
What is the benefit of executing bash command? In my Terminal Window, nothing visible to the eye happens. I noticed that the $SHLVL gets incremented, but beside that I wouldn't know that bash was executed. Additionally running bash --help doesn't really tell much. I know that bash is one of the shells available, but if you are already using Bourne Again shell, I see no benefits of nesting it. In what scenarios should I execute bash ?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/341524", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213404/" ] }
341,533
Having byte offset for a file. Is there a tool that gives line number for this byte?

Byte count starting with zero, as in: first byte is 0 not 1. Line number starting with 1. File can have both plain text, "binary" blobs, multibyte characters etc. But the section I am interested in: End of file, has only ASCII.

Example, file:

001
002
003   <<-- first zero on this line is byte 8
004

Having byte offset 8 that would give me line 3 . Guess I could use something like this to find line number:

a. tail -c+(offset + 1) file | wc -l , here +1 as tail counts from 1.
b. wc -l file
c. Then tail -n+num where num is a - b + 1

But ... is there a, fairly common, tool that can give me num directly?

Edit, err: or the more obvious: head -c+offset file | wc -l
In your example, 001002003004 byte number 8 is the second newline, not the 0 on the next line. The following will give you the number of full lines after $b bytes: $ dd if=data.in bs=1 count="$b" | wc -l It will report 2 with b set to 8 and it will report 1 with b set to 7. The dd utility, the way it's used here, will read from the file data.in , and will read $b blocks of size 1 byte. As "icarus" rightly points out in the comments below, using bs=1 is inefficient. It's more efficient, in this particular case, to swap bs and count : $ dd if=data.in bs="$b" count=1 | wc -l This will have the same effect as the first dd command, but will read only one block of $b bytes. The wc utility counts newlines, and a "line" in Unix is always terminated by a newline. So the above command will still say 2 if you set b to anything lower than 12 (the following newline). The result you are looking for is therefore whatever number the above pipeline reports, plus 1. This will obviously also count the random newlines in the binary blob part of your file that precedes the ASCII text. If you knew where the ASCII bit starts, you could add skip="$offset" to the dd command, where $offset is the number of bytes to skip into the file.
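Putting those pieces together, a sketch of a one-liner that prints the 1-based line number for a byte offset directly (file name and offset are placeholders; the 2>/dev/null just hides dd's transfer statistics):
b=8
echo $(( $(dd if=data.in bs="$b" count=1 2>/dev/null | wc -l) + 1 ))
With the example file this prints 3, as desired. Note that it assumes the offset is at least 1 byte into the file ( bs=0 is not valid); for offset 0 the answer is trivially line 1.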
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/341533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98945/" ] }
341,534
I am running something in a bash window that I don't want to interrupt or even suspend momentarily. Is it possible to view command history of that particular window's session? I have multiple windows open, so viewing .bash_history won't help much.
No, bash doesn't support that. The history is kept in memory and not available for other processes until it is saved to .bash_history in the same session using history -a or history -w . But the moment it's written to the file system, the information from which session the command originated is lost. The closest you can get is using some lines in .bashrc to let bash append every command directly after execution: https://unix.stackexchange.com/a/1292/147970 Then you can see the commands from all shells in near real-time in .bash_history . To access the history for a specific session you need to interrupt the foreground process in that session using e.g. Ctrl+Z .
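For reference, the usual recipe from that link boils down to two lines in ~/.bashrc (a common convention, adjust to taste):
shopt -s histappend
PROMPT_COMMAND='history -a'
histappend makes sessions append to rather than overwrite the history file, and history -a flushes each command to it as soon as the next prompt is drawn.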
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341534", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112656/" ] }
341,585
I downloaded a torrent file http://cdimage.debian.org/cdimage/stretch_di_rc1/amd64/bt-cd/debian-stretch-DI-rc1-amd64-netinst.iso.torrent Now I want to parse/read it so that I can find out things like - a. Which software was used to create the torrent file ? b. The size of the iso image, the size and number of pieces c. The number of trackers used by the iso image. All of which is meta-data. I guess I'm looking for what mediainfo is for a media file - [$] mediainfo Big_Buck_Bunny_small.ogv
General
ID : 30719 (0x77FF)
Complete name : Big_Buck_Bunny_small.ogv
Format : Ogg
File size : 2.65 MiB
Duration : 1 min 19 s
Overall bit rate mode : Variable
Overall bit rate : 280 kb/s
Writing application : ffmpeg2theora-0.25
SOURCE_OSHASH : cc9e38e85baf7573
Video
ID : 20319 (0x4F5F)
Format : Theora
Duration : 1 min 19 s
Bit rate : 212 kb/s
Nominal bit rate : 238 kb/s
Width : 240 pixels
Height : 134 pixels
Display aspect ratio : 16:9
Frame rate : 24.000 FPS
Compression mode : Lossy
Bits/(Pixel*Frame) : 0.275
Stream size : 2.01 MiB (76%)
Writing library : Xiph.Org libtheora 1.1 20090822 (Thusnelda)
Audio
ID : 13221 (0x33A5)
Format : Vorbis
Format settings, Floor : 1
Duration : 1 min 19 s
Bit rate mode : Variable
Bit rate : 48.0 kb/s
Channel(s) : 2 channels
Sampling rate : 48.0 kHz
Compression mode : Lossy
Stream size : 465 KiB (17%)
Writing library : libVorbis 20090709 (UTC 2009-07-09)
Is there something similar ? I am looking for a CLI tool .
transmission has a tool for that $ transmission-show debian-stretch-DI-rc1-amd64-netinst.iso.torrent Name: debian-stretch-DI-rc1-amd64-netinst.isoFile: debian-stretch-DI-rc1-amd64-netinst.iso.torrentGENERAL Name: debian-stretch-DI-rc1-amd64-netinst.iso Hash: 13d51b233d37965a7137dd65858d73c5a2e7ded4 Created by: Created on: Fri Jan 13 12:29:09 2017 Comment: "Debian CD from cdimage.debian.org" Piece Count: 1184 Piece Size: 256.0 KiB Total Size: 310.4 MB Privacy: Public torrentTRACKERS Tier #1 http://bttracker.debian.org:6969/announceFILES debian-stretch-DI-rc1-amd64-netinst.iso (310.4 MB) Another one would be intermodal which besides showing metadata can also create and verify it: https://rodarmor.com/blog/intermodal
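On Debian-based systems the tool is not part of the GUI package; it ships, I believe, in the CLI package, so something like this should make it available:
sudo apt-get install transmission-cli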
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341585", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
341,590
I am trying to use ping with a specified interface with the command ping -I re3 192.168.1.1 I know re3 exists from ifconfig : re3: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE> ether e8:de:27:01:7f:e7 inet6 fe80::eade:27ff:fe01:7fe7%re3 prefixlen 64 scopeid 0x4 inet 192.168.1.2 netmask 0xffffff00 broadcast 192.168.1.255 nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL> media: Ethernet autoselect (100baseTX <full-duplex>) status: active Unfortunately I can't ping its gateway: $/root: ping -I re3 192.168.1.1
ping: invalid multicast interface: `re3' What does that mean? UPDATE $arp 192.168.1.1
? (192.168.1.1) at (incomplete) on re3 expired [ethernet]
I don't have much experience with FreeBSD, but as far as I know the option to set the source address for ping there is -S : ping -S 192.168.1.2 192.168.1.1 As for arp : if arp can't fetch the MAC address of your gateway, then the FreeBSD server has lost connectivity with the gateway. Check whether the gateway is up and running, and also check the physical connectivity on both ends.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28089/" ] }
341,611
I'm using Slackware. Firefox is running. I also have a virtual machine running Ubuntu 16.04 using VirtualBox. I've installed Firefox on the virtual machine, and Firefox is installed on the host computer. I opened an SSH session in the virtual machine and ran Firefox. It opened a new window of my host computer's Firefox. Why did it do this? I was expecting two running instances of Firefox: one on my host computer and one on the virtual machine.
When Firefox starts, it looks for a Firefox window running on the same display, and if it finds one, it focuses this window (and if you pass a URL on the command line, it opens a new tab to load the URL in the existing window). You must have run SSH with X11 display forwarding. Since X11 forwarding is active, all GUI programs that you start in the SSH session will be displayed on the local machine. If you X11 forwarding was not active in the SSH connection, then GUI applications run from the SSH session would have nowhere to display. They'd just complain “Error: no display specified” or some similar error message. X11 is inherently network-transparent, so it doesn't have a notion of “the local display”. The display is whatever you tell the application is the display. There can be multiple local displays, e.g. in the case of a multiseat configuration. There isn't one “true” display like there is with Windows. If you're running a program remotely and you want it to display on the monitor of the remote machine, you need to run an X server on the remote machine and you need to explicitly tell the program to connect to that display. By default, if you do nothing, programs will be displayed on the machine that you're in front of.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/341611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
341,629
Wanting to play around with Trusted Platform Module stuff, I installed TrouSerS and tried to start tcsd , but I got this error: TCSD TDDL ERROR: Could not find a device to open! However, my kernel has multiple TPM modules loaded: # lsmod | grep tpmtpm_crb 16384 0tpm_tis 16384 0tpm_tis_core 20480 1 tpm_tistpm 40960 3 tpm_tis,tpm_crb,tpm_tis_core So, how do I determine if my computer is lacking TPM vs TrouSerS having a bug? Neither dmidecode nor cpuid output anything about "tpm" or "trust". Looking in /var/log/messages , on the one hand I see rngd: /dev/tpm0: No such file or directory , but on the other hand I see kernel: Initialise system trusted keyrings and according to this kernel doc trusted keys use TPM. EDIT : My computer's BIOS setup menus mention nothing about TPM. Also, looking at /proc/keys : # cat /proc/keys ******** I--Q--- 1 perm 1f3f0000 0 65534 keyring _uid_ses.0: 1******** I--Q--- 7 perm 3f030000 0 0 keyring _ses: 1******** I--Q--- 3 perm 1f3f0000 0 65534 keyring _uid.0: empty******** I------ 2 perm 1f0b0000 0 0 keyring .builtin_trusted_keys: 1******** I------ 1 perm 1f0b0000 0 0 keyring .system_blacklist_keyring: empty******** I------ 1 perm 1f0f0000 0 0 keyring .secondary_trusted_keys: 1******** I------ 1 perm 1f030000 0 0 asymmetri Fedora kernel signing key: 34ae686b57a59c0bf2b8c27b98287634b0f81bf8: X509.rsa b0f81bf8 []
TPMs don't necessarily appear in the ACPI tables, but the modules do print a message when they find a supported module; for example [ 134.026892] tpm_tis 00:08: 1.2 TPM (device-id 0xB, rev-id 16) So dmesg | grep -i tpm is a good indicator. The definitive indicator is your firmware's setup tool: TPMs involve ownership procedures which are managed from the firmware setup. If your setup doesn't mention anything TPM-related then you don't have a TPM. TPMs were initially found in servers and business laptops (and ChromeBooks, as explained by icarus ), and were rare in desktops or "non-business" laptops; that’s changed over the last few years, and Windows 11 requires a TPM now. Anything supporting Intel TXT has a TPM.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/341629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41033/" ] }
341,740
I have strings: AddDataTestSomethingTellMeWhoYouAre and so on. I want to add a space before uppercase letters. How can I do it?
Using sed , and assuming you don't want a space in front of the word: $ sed 's/\([^[:blank:]]\)\([[:upper:]]\)/\1 \2/g' file.inAdd DataTest SomethingTell Me Who You Are The substitution will look for an upper-case letter immediately following a another non-whitespace character, and insert a space in-between the two. For strings with more than one consecutive upper-case character, like WeAreATeam , this produces We Are ATeam . To sort this, run the substitution a second time: $ sed -e 's/\([^[:blank:]]\)\([[:upper:]]\)/\1 \2/g' \ -e 's/\([^[:blank:]]\)\([[:upper:]]\)/\1 \2/g' file.in
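If you'd rather not repeat the substitution by hand, sed can loop until nothing changes any more, using a label and the t (branch if a substitution was made) command — the same expression as above, just wrapped in a loop:
sed -e ':a' -e 's/\([^[:blank:]]\)\([[:upper:]]\)/\1 \2/g' -e 'ta' file.in
This turns WeAreATeam into We Are A Team in however many passes it takes.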
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/341740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213632/" ] }
341,854
I am running CentOS 7 and need to mount an NFS share which is protected by credentials. I have read the nfs, mount, mount.nfs manuals and can't find the right options that work! I think the right options are 'user' and 'pass', but I've tried 'username' and 'password' and everything inbetween, but I get: mount -t nfs -o user=root,pass=mypass lserver:/root /mnt/d0mount.nfs: an incorrect mount option was specified Can someone tell me the right syntax/options to make this work? (It really shouldn't be this hard)
Specifying username and password are options for cifs (samba) , but not nfs . According to this RHEL Documentation : NFS controls who can mount an exported file system based on the host making the mount request, not the user that actually uses the file system. Hosts must be given explicit rights to mount the exported file system. Access control is not possible for users, other than through file and directory permissions.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/341854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23091/" ] }
341,989
I checked the od man page, but it does not explain this. What do the numbers on the left column of the od output mean?
This is actually mentioned in the info page for od (available by running info od or by visiting https://www.gnu.org/software/coreutils/manual/html_node/od-invocation.html#od-invocation which is also linked to at the end of the man page) file, albeit not in very much depth: Each line of output consists of the offset in the input, followed by groups of data from the file. By default, od prints the offset in octal, and each group of file data is a C short int’s worth of input printed as a single octal number. So, in your output, the numbers shown are octal 0000000, 0000020 and 0000030 which are decimal 0, 16 and 24. Note that the n of the word written is the 17th character (byte, here) of the file, therefore it can be found by beginning to read with an offset of 16 and the final newline is the 24th, so the next (empty) line of output starts with an offset of 24.
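If you'd rather not read octal, od can also print the offsets in another radix: -A d selects decimal offsets, -A x hexadecimal, and -A n suppresses them entirely — for example:
od -A d -c file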
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/341989", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164532/" ] }
342,008
I use this a lot; the improvement I'm trying to achieve is to avoid echoing file names that did not match in grep. Is there a better way to do this? for file in `find . -name "*.py"`; do echo $file; grep something $file; done
find . -name '*.py' -exec grep something {} \; -print would print the file name after the matching lines. find . -name '*.py' -exec grep something /dev/null {} + would print the file name in front of every matching line (we add /dev/null for the case where there's only one matching file as grep doesn't print the file name if it's passed only one file to look in. The GNU implementation of grep has a -H option for that as an alternative). find . -name '*.py' -exec grep -l something {} + would print only the file names of the files that have at least one matching line. To print the file name before the matching lines, you could use awk instead: find . -name '*.py' -exec awk ' FNR == 1 {filename_printed = 0} /something/ { if (!filename_printed) { print FILENAME filename_printed = 1 } print }' {} + Or call grep twice for each file - though that'd be less efficient as it would run at least one grep command and up to two for each file (and read the content of the file twice): find . -name '*.py' -exec grep -l something {} \; \ -exec grep something {} \; In any case, you don't want to loop over the output of find like that and remember to quote your variables . If you wanted to use a shell loop, with GNU tools: find . -name '*.py' -exec grep -l --null something {} + | xargs -r0 sh -c ' for file do printf "%s\n" "$file" grep something < "$file" done' sh (also works on FreeBSD and derivatives).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/342008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148706/" ] }
342,019
Using sudo apt-get install afl gives E: Unable to locate package afl on my machine. How can I install American fuzzy lop on Ubuntu?
Alright, it wasn't that hard. I just cloned the source code from a mirror, make 'd and make install 'ed: Clone Git repository of afl , I use a mirror I found on GitHub: git clone https://github.com/mirrorer/afl Change directory and make and make install : cd aflmake && sudo make install Of course, there might be some libraries you need to install in order to compile. I did not need to do this.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342019", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206621/" ] }
342,020
I have a large csv file looking a little like this: SomeData,SomeData,1,SomeData SomeData,SomeData,1,SomeData SomeData,SomeData,2,SomeData SomeData,SomeData,3,SomeData SomeData,SomeData,1,SomeData SomeData,SomeData,1,SomeData SomeData,SomeData,1,SomeData SomeData,SomeData,1,SomeData SomeData,SomeData,2,SomeData SomeData,SomeData,3,SomeData SomeData,SomeData,4,SomeData SomeData,SomeData,5,SomeData SomeData,SomeData,1,SomeData SomeData,SomeData,1,SomeData SomeData,SomeData,1,SomeData SomeData,SomeData,1,SomeData I want to create a new csv file which only contains rows where the 3rd value is part of a set i.e. if the value in 3rd field of the line below is one higher, then I want both those lines to be included. So, in the example above I only want rows 2-4 and 8-12 to be saved in a new file. I'm struggling to work out how to tell grep to look for that pattern. Any ideas? Thanks
grep matches one line at a time and has no memory of the previous line, so it cannot express "this field is one higher than it was on the line before". A stateful line-by-line tool such as awk is a better fit: remember the previous line and its 3rd field, and whenever the current 3rd field is exactly one higher, print the remembered line (if it hasn't been printed yet) followed by the current one; otherwise just update the memory. A sketch follows.
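A minimal, untested awk sketch of that idea (field 3 is assumed to be a plain number; adjust the field separator if your quoting differs):
awk -F, '
NR > 1 && $3 == prev + 1 {
    if (!printed) print prevline   # first line of the run
    print                          # current line of the run
    printed = 1
    prev = $3; prevline = $0
    next
}
{ prev = $3; prevline = $0; printed = 0 }
' file.csv > newfile.csv
On the sample data this prints rows 2-4 and 8-12, as wanted.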
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342020", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213848/" ] }
342,053
I have lines like this in a file x;x;x;x;x;x;cmd="lbk_addcolumn TABLE_NAME_1 COLUMN_X";x;x;x;xx;x;x;x;x;x;cmd="lbk_dropcolumn TABLE_NAME_2 COLUMN_Y";x;x;x;x I wish to replace the field with "cmd" in the file to something like this : x;x;x;x;x;x;cmd="ColumnAdded TABLE_NAME_1 COLUMN_X || lbk_addcolumn TABLE_NAME_1 COLUMN_X";x;x;x;xx;x;x;x;x;x;cmd="ColumnDropped TABLE_NAME_2 COLUMN_Y || lbk_dropcolumn TABLE_NAME_2 COLUMN_Y ";x;x;x;x How can I do this?
sed with a capture group can do this: capture everything between the command name and the closing quote, and reuse it twice in the replacement — once after the new check command and once after the original command — with one substitution for the addcolumn case and one for the dropcolumn case. A sketch follows.
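A sketch using sed's -E (extended regex) syntax — ColumnAdded / ColumnDropped are taken verbatim from your desired output:
sed -E \
  -e 's/cmd="lbk_addcolumn ([^"]*)"/cmd="ColumnAdded \1 || lbk_addcolumn \1"/' \
  -e 's/cmd="lbk_dropcolumn ([^"]*)"/cmd="ColumnDropped \1 || lbk_dropcolumn \1"/' \
  file
The ([^"]*) captures the table and column names up to the closing quote, and \1 pastes them back in twice. Once the output looks right, add -i (GNU sed) to edit the file in place.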
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213850/" ] }
342,063
I have a large file which has messages that are separated by a <> tag. I need to fetch the entire tag (with value). Please see example below: <tvd:HostProductListStatus>000</tvd:HostProductListStatus><tvd:BeefProductListStatus>000</tvd:BeefProductListStatus><tvd:CustomerBranding>CC</tvd:CustomerBranding><tvd:InquiryAllowed>true</tvd:InquiryAllowed> I need to just fetch and display only the following tag:value from the file, regardless if it appears more than once: <tvd:BeefProductListStatus>000</tvd:BeefProductListStatus> What would be command to do that?
grep 's -o option prints only the part of each line that matches instead of the whole line, so a pattern covering the opening tag, the value, and the closing tag will print exactly the fragment you want, once per occurrence — even if it appears several times. (For anything more involved than a flat one-off extraction, a real XML-aware tool such as xmlstarlet is the safer choice, since regular expressions and XML don't mix well in general.) A sketch follows.
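A minimal sketch (the file name is a placeholder):
grep -o '<tvd:BeefProductListStatus>[^<]*</tvd:BeefProductListStatus>' file
[^<]* matches the value without running past the closing tag, and -o prints each match on its own line, however many times it occurs.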
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213874/" ] }
342,064
I'm using Ranger to navigate around my file system. Is there a shortcut where I cd into a folder without leaving Ranger (as in open bash with a location of a folder found by navigating in Ranger) ?
I found the answer to this in the man pages : S Open a shell in the current directory Yes, probably should have read through that before asking here.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/342064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
342,106
I'm currently trying to get ssh-agent to work. No matter what I'm doing, I just can't get around the password prompt. For testing I even tried connecting to localhost: ssh-keygen to generate the id_rsa ssh-add id_rsa in the .ssh folder ssh-add -l shows the correct fingerprint ssh user@localhost still asks me for a password eval $(ssh-agent -s) shows the process running Is there something else I need to configure before using the ssh-agent? I tried it with several machines and users, as well as RSA and DSA keys. I'm using Debian 7 btw. I would appreciate if someone could give me a hint where my problem might be.
You generated a ssh key. That alone doesn't enable public key authentication, you also need to add the public key to the file ~/.ssh/authorized_keys on the remote machine, to the account you want to log to. The easy way to do that is with ssh-copy-id : ssh-copy-id hostname or ssh-copy-id username@hostname if the username on the remote host is different from the one on the current machine. This will ask for your password on the remote machine.
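If ssh-copy-id isn't available on your machine, the same thing can be done by hand — a sketch assuming the default key file name:
cat ~/.ssh/id_rsa.pub | ssh user@hostname 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
The chmod calls matter: sshd refuses to use keys from files it considers too permissive.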
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342106", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210618/" ] }
342,158
I run OS/X on my personal machine and Linux on my remote servers. (I could never afford an Xserve even back when they still made 'em.) I'd love to use the same .bashrc file for both and I want to have ls show me colors so I can identify executable files, directories, and the like. But the GNU ls on Linux wants to see the --colors command line switch or it will refuse to show colors. Apple's (BSD?) ls prefers to see the export CLICOLORS=1 shell variable to tell it to show colors. I'd do both, but the Apple ls barfs if it sees an unknown --colors switch. Is there are good way in .bashrc for me to detect whether ls accepts a switch and then decide whether to alias it with --colors or not?
When you're trying to do something portably, test for features, not platforms: if ls --help 2>&1 | grep -q -- --colorthen alias ls='ls --color=auto -F'else alias ls='ls -FG'fi Platform tests break when platforms change. macOS ships a mix of BSD and GNU userland tools today, but this mix is shifting over time towards a greater preponderance of BSD tools. So, a test for "macOS" today may fail tomorrow when Apple replaces a GNU tool you were depending on with its nearest BSD equivalent if you are relying on a feature the two implement differently. Feature tests more often keep working in the face of change. As a bonus, you sometimes end up creating support for platforms you did not originally test against. The above script fragment should also do the right thing on Solaris and FreeBSD, for example. (This is the philosophy behind GNU Autoconf, by the way, which is why a configure script written 10 years ago probably still works on a brand new system today.) Modify the aliases to suit. I'm just showing the values I use on the macOS and Linux systems nearest to hand as I write this.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342158", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57781/" ] }
342,348
Over many years I used ksh . I like the possibility to use Esc v in command history to call the editor "vi". If this command in the history was spread over many lines - for example because of a while loop - "vi" shows this history also over many lines. With this feature it is easily possible to write complex statements without writing the input to a file. Years ago I changed to bash . It has the same possibility with the default shortcut Ctrl - X Ctrl - E . The slight difference is that bash merges all lines into one long line delimited with semicolons. The syntax is still correct but we lose readability. So what I do is call ksh if I see the commands becoming complex. Is there a way to configure bash so that it does not merge the lines of the history, and acts as ksh does? Any help is welcome.
Use: shopt -s lithist lithist If set, and the cmdhist option is enabled, multi-line commands are saved to the history with embedded newlines rather than using semicolon separators where possible. I suspect the reason this isn't enabled by default is because people often use commands like history | grep something to find history entry numbers. If a history entry is split across multiple lines, the line that matches grep won't always contain the entry number.
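To make the behaviour persistent, put both options in your ~/.bashrc ( cmdhist is usually on by default, but setting it explicitly does no harm):
shopt -s cmdhist lithist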
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342348", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148885/" ] }
342,351
I have a Debian 8.5 computer. In order to create a new session I run the command startx . With this command a new session is created. How can I close this session via a command and return to the previous one?
Kill the master process of the X session. The master process is the one that started out in life as the child of xinit , i.e. ~/.xinitrc (which is typically a shell script). Usually the last thing .xinitrc does is to call a window manager or a session manager (e.g. twm , fvwm , gnome-session , …). To remember the process ID, you can put it in an environment variable. For example, I have this in my .xinitrc : export XSESSION_PID="$$"…exec my-favorite-window-manager This way, I can exit by using my-favorite-window-manager's “exit” command, or by running kill $XSESSION_PID from any shell in this X session. Alternatively, if you're modern enough to run D-Bus and a D-Bus aware window/session manager, you can let it know that you want to log out by sending it a command over D-Bus. See Universal way to logout from terminal via dbus
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139986/" ] }
342,353
I have installed Arch Linux with LUKS on a btrfs file system. When logging in, I can't mount my filesystem on /dev/sda2 because the keyboard is US (I need a French key map). I tried changing /etc/vconsole.conf to FR and running locale-gen , but the keyboard doesn't change on the next boot.
Your /etc/mkinitcpio.conf needs to look like this: HOOKS="... keyboard keymap encrypt..." You need to load the keymap during boot, which is done by an mkinitcpio hook. Make sure that the keymap or sd-vconsole hook (depending on whether you use sd-* style hooks) occurs before encrypt/sd-encrypt and regenerate your initrd.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342353", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214034/" ] }
342,401
I have a process that has been running for a very long time. I accidentally deleted the binary executable file of the process. Since the process is still running and is not affected, the original binary must still exist somewhere.... How can I recover it? (I use CentOS 7; the running process is written in C++)
It could only be in memory and not recoverable, in which case you'd have to try to recover it from the filesystem using one of those filesystem recovery tools (or from memory, maybe). However! $ cat hamlet.c#include <unistd.h>int main(void) { while (1) { sleep(9999); } }$ gcc -o hamlet hamlet.c$ md5sum hamlet30558ea86c0eb864e25f5411f2480129 hamlet$ ./hamlet &[1] 2137$ rm hamlet$ cat /proc/2137/exe > newhamlet$ md5sum newhamlet 30558ea86c0eb864e25f5411f2480129 newhamlet$ With interpreted programs, obtaining the script file may be somewhere between tricky and impossible, as /proc/$$/exe will point to perl or whatever, and the input file may already have been closed: $ echo sleep 9999 > x$ perl x &[1] 16439$ rm x$ readlink /proc/16439/exe/usr/bin/perl$ ls /proc/16439/fd0 1 2 Only the standard file descriptors are open, so x is already gone (though may for some time still exist on the filesystem, and who knows what the interpreter has in memory).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146221/" ] }
342,403
Can't install Java8 apt-get install openjdk-8-jre-headless
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
The following packages have unmet dependencies: openjdk-8-jre-headless : Depends: ca-certificates-java but it is not going to be installed
E: Unable to correct problems, you have held broken packages
I've searched Google and I've added repos and other suggestions, but nothing has allowed me to install Java 8 yet. ideas?
lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.7 (jessie)
Release: 8
Codename: jessie
Is this jessie? With backports: apt install -t jessie-backports openjdk-8-jre-headless ca-certificates-java
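If the jessie-backports suite isn't enabled yet, something like the following should set it up first (the mirror URL is one common choice, not the only one):
echo 'deb http://ftp.debian.org/debian jessie-backports main' >> /etc/apt/sources.list
apt-get update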
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/342403", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47368/" ] }
342,418
I have searched, but have come up short with an answer. I want to print the last line (or record) in a file; then print the first line using only awk. I know how to print first line: NR == 1{print} and last line END{print} However, how can I switch the positions? Is it even possible? I only get errors when I try to do so. Is there a way to integrate the NR == 1{print} command into the END command? Again, I only want to perform this in awk. Thanks!
Just save the first line to a variable awk 'NR==1 {first = $0} END {print; print first}' file Ex. given file as line 1line 2line 3line 4line 5 then $ awk 'NR==1 {first = $0} END {print; print first}' fileline 5line 1
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/342418", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211625/" ] }
342,463
I am trying to mount root and boot partition of Raspbian image: mount -v -o offset="70254592" -t ext4 /mnt/X/raspbian-jessie.img /tmp/raspbianmount -v -o offset="4194304" -t vfat /mnt/X/raspbian-jessie.img /tmp/boot mounting boot, when root is mounted results in: mount: /mnt/X/raspbian-jessie.img: overlapping loop device exists How to mount multiple partitions on one disk image at same time? (for disks it's obviously possible, why not for files?)
You need to specify the length of the partition(s) to avoid overlap. Option sizelimit , see man mount , man losetup .
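Applied to the numbers in the question — assuming the boot partition runs right up to where the root partition starts, its size is 70254592 − 4194304 = 66060288 bytes (check the real partition table with fdisk -l raspbian-jessie.img rather than trusting that assumption):
mount -v -o offset=4194304,sizelimit=66060288 -t vfat /mnt/X/raspbian-jessie.img /tmp/boot
mount -v -o offset=70254592 -t ext4 /mnt/X/raspbian-jessie.img /tmp/raspbian
With the size limited, the two loop devices no longer overlap.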
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342463", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9689/" ] }
342,489
Using the command line tools available in a common GNU/Linux distro (e.g. Fedora/Debian/Ubuntu/etc), is there a general way to get the value of some specific WHOIS field (e.g. the registrant's organisation name), ideally without having to build a custom WHOIS parser that is hard-coded to handle the differences between each registry's output? This seems worth asking, because the output from the whois command does not appear to be very consistent. For example, compare: $ whois trigger.io[...]Owner OrgName : Amir Nathoo[...] with: $ whois facebook.com[...]Registrant Organization: Facebook, Inc.[...] I would like, instead, to be able to pass, as arguments to some command: the domain name the desired field and have the output simply be the value of the desired field. For instance, based on the examples above, something like: $ some_whois_command -field organization_name trigger.ioAmir Nathoo$ some_whois_command -field organization_name facebook.comFacebook, Inc. Is this possible? Ideally, I would like the solution to centre on the whois command, e.g. with some suitable usage of -i , -q , -t , and/or -v , as I want to learn how to make effective use of these options. I will accept another solution as correct if necessary, however.
The problem appears to be at least two-fold: WHOIS responses do not share a common schema, and there is a dearth of WHOIS clients able to parse WHOIS responses and to map their fields (e.g. using a suitable ontology) onto a single schema. The Ruby Whois project is the most extensive effort I have found. It aims to provide a parser for each of the 500+ different WHOIS servers , and its developers deserve immense credit, but it remains a work in progress. This is a sorry state of affairs. The IETF's proposed solution for this and other WHOIS woes is called the Registration Data Access Protocol (RDAP) . Quoting RFC 7485 , which explains the rationale for RDAP: In the domain name space, there were over 200 country code Top-Level Domains (ccTLDs) and over 400 generic Top-Level Domains (gTLDs) when this document was published. Different Domain Name Registries may have different WHOIS response objects and formats. A common understanding of all these data formats was critical to construct a single data model for each object. (Emphasis mine.) Unfortunately, whereas most (all?) TLD registries provide WHOIS servers for their subdomains, only two TLD registries have so far formally fielded RDAP servers for their subdomains : CZNIC for .cz domains, and NIC Argentina for .ar domains. So, this is not (yet) a generally applicable solution across a wide range of TLDs. We can only hope that all the other registries will hurry up and field RDAP servers. As for software, the only RDAP command line client for POSIX systems that I have found so far is nicinfo .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342489", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
342,501
Say it's very easy if I want to find something containing lower-case letters and numbers with produce_text | grep -E '[0-9a-z]' Brackets are useful to match a set of characters, but what about those that are somewhat special? If I want to, using brackets, match any character but one of these: a closing bracket ] , a dash (or hyphen) "-", both slashes / and \ , a caret ^ , a colon : . Will it look like this (I know this doesn't work)? [^]-/\^:]
To match a literal ] and a literal - in a Bracket Expression you'll have to use them like this: [^]/\^:-] or, even better, since some tools require the backslash to be escaped : [^]/\\^:-] that is The right-square-bracket ( ']' ) shall lose its special meaning and represent itself in a bracket expression if it occurs first in the list (after an initial '^', if any) and The hyphen-minus character shall be treated as itself if it occurs first (after an initial '^', if any) or last in the list hence If a bracket expression specifies both '-' and ']', the ']' shall be placed first (after the '^', if any) and the '-' last within the bracket expression. The rules for bracket expressions are the same for ERE and BRE .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/342501", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211239/" ] }
342,554
My keyboard has dedicated keys to change the audio volume and to mute/unmute audio. How can I make these work in XFCE?
Right click a panel -> Panel submenu -> Add New Items... Add an instance of PulseAudio Plugin Right click the icon that just appeared in your panel and click "Properties". Make sure "Enable keyboard shortcuts for volume control" is enabled. You may have to install the PulseAudio Plugin first. In Debian and Debian-based distributions, the package is called xfce4-pulseaudio-plugin .
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/342554", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18110/" ] }
342,598
The third party scheduling application our enterprise uses doesn't execute rm commands as expected. By this, I mean I expect the rm -f $filetoremove to complete and then continue to the next line of code in the script. But I need to get it to execute, preferably rm -f . Is there another method to remove a file without using rm ? I tried > delete_foobar.file but it just empties it without removing. Additional information : My work environment is a large enterprise. I write the .sh script which I test outside the scheduling application. Outside the scheduling software, the rm -f $filetoremove command works with a return code of 0 . However, the scheduling software does not register the 0 return code and immediately exits without running the remainder of the .sh script. This is problematic and the vendor has acknowledged this defect. I'm not privy to the details of the automation software nor the exact return codes it receives. All I know is that my scripts don't run completely, when run via the automation software, if they contain rm . This is why I'm looking for alternatives to rm . Yes, it is important that I remove the file once I've completed processing it.
The unlink command is also part of POSIX: unlink <file>
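Note that unlink takes exactly one operand and, unlike rm -f , fails if the file doesn't exist. In the script, a sketch of a drop-in replacement could be:
unlink "$filetoremove" 2>/dev/null || true
where the || true mimics rm -f 's indifference to a missing file — it also hides genuine errors such as permission problems, so drop it if you need to detect those.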
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/342598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128687/" ] }
342,663
I'm wondering who starts unattended-upgrades in my debian-jessie: my man page DESCRIPTION This program can download and install security upgrades automatically and unattended, taking care to only install packages from the configured APT source, and checking for dpkg prompts about configuration file changes. All output is logged to /var/log/unattended-upgrades.log. This script is the backend for the APT::Periodic::Unattended-Upgrade option and designed to be run from cron (e.g. via /etc/cron.daily/apt). But my crontab doesn't show anything by the crontab command:
@stefano:/etc/cron.daily$ crontab -l
no crontab for stefano
# crontab -l
no crontab for root
But my unattended-upgrades works fine! (my unattended-upgrades log file):
2017-02-05 12:42:42,835 INFO Initial blacklisted packages:
2017-02-05 12:42:42,866 INFO Initial whitelisted packages:
2017-02-05 12:42:42,868 INFO Starting unattended upgrades script
2017-02-05 12:42:42,870 INFO Allowed origins are: ['o=Debian,n=jessie', 'o=Debian,n=jessie-updates', 'o=Debian,n=jessie-backports', 'origin=Debian,codename=jessie,label=Debian-Security']
2017-02-05 12:43:15,848 INFO No packages found that can be upgraded unattended
Where do I have to check/modify if I want to change my schedule?
Where do I have to check/modify if I want to change my schedule? The unattended-upgrades is configured to be applied automatically . To verify it check the /etc/apt/apt.conf.d/20auto-upgrades file , you will get : APT::Periodic::Update-Package-Lists "1";APT::Periodic::Unattended-Upgrade "1"; to modify it you should run the following command: dpkg-reconfigure -plow unattended-upgrades sample output: Applying updates on a frequent basis is an important part of keeping systems secure. By default, updates need to be applied manually using package management tools. Alternatively, you can choose to have this system automatically download and install security updates. Automatically download and install stable updates? Choose NO to stop the auto update Verify the /etc/apt/apt.conf.d/20auto-upgrades again, you should get : APT::Periodic::Update-Package-Lists "0";APT::Periodic::Unattended-Upgrade "0"; Edit To run the unattended-upgrades weekly edit your /etc/apt/apt.conf.d/20auto-upgrades as follows : APT::Periodic::Update-Package-Lists "7";APT::Periodic::Unattended-Upgrade "1"; A detailed example can be found on Debian-Wiki : automatic call via /etc/apt/apt.conf.d/02periodic APT::Periodic::Update-Package-Lists This option allows you to specify the frequency (in days) at which the package lists are refreshed. apticron users can do without this variable, since apticron already does this task.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/342663", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175540/" ] }
342,704
I have 2 files: $ cat file1 jim.smith john.doe bill.johnson alex.smith $ cat file2 "1/26/2017 8:02:01 PM",Valid customer,jim.smith,NY,1485457321 "1/30/2017 11:09:36 AM",New customer,tim.jones,CO,1485770976 "1/30/2017 11:14:03 AM",New customer,john.doe,CA,1485771243 "1/30/2017 11:13:53 AM",New customer,bill.smith,CA,1485771233 I want all the names from file2 that do not exist in file1. The following does not work: $ cut -d, -f 3 file2 | sed 's/"//g' | grep -v file1 jim.smith tim.jones john.doe bill.smith Why does the pipe to grep -v not work in this case?
This is virtually the last step in my answer to your earlier question . Your solution works, if you add -f in front of file1 in the grep : $ cut -d, -f3 file2 | grep -v -f file1tim.jonesbill.smith With the -f , grep will look in file1 for the patterns. Without it, it will simply use file1 as the literal pattern. You might also want to use -F since otherwise, the dot in the pattern will be interpreted as "any character". And while you're at it, put -x in there as well to make grep perform the match across the whole line (will be useful if you have a joe.smith that shouldn't match joe.smiths ): $ cut -d, -f3 file2 | grep -v -F -x -f file1 This requires, obviously, that there are no trailing spaces at the end of the lines in file1 (which there seems to be in the text in the question). Note that the sed is not needed since the output of the cut doesn't contain any " . Also, if you had needed to remove all " , then tr -d '"' would have been a better tool.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/342704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42132/" ] }
342,708
When installing Windows to a KVM I have gotten to the following: # virt-install \ --name vm1 \ --ram=2048 \ --vcpus=2 \ --disk path=/vm-images/vm1.img,size=15 \ --cdrom /root/dvd1.iso This will however boot Windows but I'll have no way to run through the interactive install. Is my only way to build a zero-touch type install locally then push that so Windows installs automatically? The other part that gets me lost is how will I know its IP or if RDP is enabled so I can log in remotely to the Windows machine?
Your virt-install command gives the guest no graphics device at all, so there is no console to click through the interactive Windows installer with. The usual approach is to add one — e.g. --graphics vnc (or spice) — and then connect to the guest's console with virt-viewer or virt-manager , locally or over SSH, and perform the install interactively. A fully zero-touch install is also possible by supplying an autounattend.xml answer file on a second ISO or floppy image, but that is only worth building if you deploy many guests. As for the IP address: on the default libvirt NAT network, virsh domifaddr vm1 shows the guest's DHCP lease (the dnsmasq lease files under /var/lib/libvirt/dnsmasq/ are another place to look). RDP is disabled by default in Windows; you have to enable it inside the guest (System Properties → Remote) — or through the answer file in the unattended case — before you can log in remotely. A sketch of the adjusted command follows.
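A sketch of the adjusted command — everything is carried over from the question except the last two options, which are assumptions to adapt (list valid --os-variant values with osinfo-query os ):
virt-install \
  --name vm1 \
  --ram=2048 \
  --vcpus=2 \
  --disk path=/vm-images/vm1.img,size=15 \
  --cdrom /root/dvd1.iso \
  --os-variant win2k12r2 \
  --graphics vnc,listen=0.0.0.0
Then run virt-viewer vm1 (or point any VNC client at the host) and click through the installer.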
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/342708", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47368/" ] }
342,732
After reading that Debian testing is more popular with desktop users than Debian stable, i decided to upgrade from stable to testing. I replaced all instances of "jessie" with "testing" with the command "sed -i 's/jessie/stable/g' /etc/apt/sources.list. Then, I did the upgrade with the command "sudo apt-get update && sudo apt-get upgrade". Now, when I try to install packages or upgrade, I get the following output: # apt-get upgradeReading package lists... DoneBuilding dependency tree Reading state information... DoneYou might want to run 'apt-get -f install' to correct these.The following packages have unmet dependencies: console-setup : Depends: keyboard-configuration (= 1.123) but 1.156 is installed console-setup-linux : Depends: keyboard-configuration (= 1.123) but 1.156 is installed libpurple-bin : Depends: libpurple0 but it is not installed systemd : Depends: libsystemd0 (= 215-17+deb8u5) but 232-8 is installed udev : Depends: libudev1 (= 215-17+deb8u5) but 232-8 is installedE: Unmet dependencies. Try using -f. So naturally, I followed the instructions and tried using -f: # apt-get -f installReading package lists... DoneBuilding dependency tree Reading state information... DoneCorrecting dependencies... failed.The following packages have unmet dependencies: console-setup : Depends: keyboard-configuration (= 1.123) but 1.156 is installed console-setup-linux : Depends: keyboard-configuration (= 1.123) but 1.156 is installed libpurple-bin : Depends: libpurple0 but it is not installed systemd : Depends: libsystemd0 (= 215-17+deb8u5) but 232-8 is installed udev : Depends: libudev1 (= 215-17+deb8u5) but 232-8 is installedE: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.E: Unable to correct dependencies I get a similar error when trying to install individual packages. Here is what my sources.list looks like: # # deb cdrom:[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20160609-14:12]/ testing contrib main non-free# deb cdrom:[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20160609-14:12]/ testing contrib main non-freedeb http://debian.gtisc.gatech.edu/debian/ testing main deb-src http://debian.gtisc.gatech.edu/debian/ testing main deb http://security.debian.org/ testing/updates main contrib non-freedeb-src http://security.debian.org/ testing/updates main contrib non-free# testing-updates, previously known as 'volatile'deb http://debian.gtisc.gatech.edu/debian/ testing-updates main contrib non-freedeb-src http://debian.gtisc.gatech.edu/debian/ testing-updates main contrib non-free So, any suggestions on how to resolve this problem?
Two things went wrong here. First, your sed command ( s/jessie/stable/g ) doesn't match the sources.list you posted, which says testing — make sure every line really points at the suite you intend (and note that the security archive for testing is much more sparsely populated than for stable). Second, and more importantly, you used apt-get upgrade for a release upgrade: upgrade only updates packages that can be updated without installing new packages or removing existing ones, so tightly coupled sets like systemd / libsystemd0 / udev end up half-upgraded — exactly the unmet dependencies you are seeing. A cross-release jump needs apt-get dist-upgrade (or apt full-upgrade ), which is allowed to add and remove packages to satisfy the new dependency tree. A sketch of the recovery steps follows.
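A hedged recovery sequence, run as root after the sources are straightened out ( dist-upgrade will propose package removals — read the list before confirming):
apt-get update
dpkg --configure -a
apt-get dist-upgrade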
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/342732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214395/" ] }
342,735
I'm running Docker(1.9.1) on Ubuntu 16.04. When I run docker info the last line of the output says WARNING: No swap limit support . INFO[0781] GET /v1.21/info Containers: 0Images: 0Server Version: 1.9.1Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 0 Dirperm1 Supported: trueExecution Driver: native-0.2Logging Driver: json-fileKernel Version: 4.4.0-62-genericOperating System: Ubuntu 16.04.1 LTS (containerized)CPUs: 2Total Memory: 3.664 GiBName: lenovoID: A3ZV:2EVK:U5QB:O7CG:PEDL:SANK:X74X:QNLC:VOTK:GFDR:S24T:C5KTWARNING: No swap limit support What does this warning mean? I definitely have a swap partition, as evidenced by free -mh though I don't understand why my swap has no entry under available total used free shared buff/cache availableMem: 3.7G 1.9G 182M 157M 1.6G 1.3GSwap: 3.8G 2.9M 3.8G
Swap limit support allows you to limit the swap the container uses, see https://docs.docker.com/engine/admin/resource_constraints According to https://docs.docker.com/engine/installation/linux/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities : You can enable these capabilities on Ubuntu or Debian by following these instructions. Memory and swap accounting incur an overhead of about 1% of the total available memory and a 10% overall performance degradation, even if Docker is not running. 1) Log into the Ubuntu or Debian host as a user with sudo privileges. 2) Edit the /etc/default/grub file. Add or edit the GRUB_CMDLINE_LINUX line to add the following two key-value pairs: GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1" 3) Update GRUB. $ sudo update-grub
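A reboot is needed after update-grub for the new kernel command line to take effect; afterwards you can verify it like this:
cat /proc/cmdline                 # should now contain cgroup_enable=memory swapaccount=1
docker info 2>&1 | grep -i swap   # should no longer print the warning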
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/342735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206074/" ] }
342,736
I'd like to disable my CD/DVD drive so that it doesn't spin up every time I select Save in my Kate editor, or select a file-accessing action in other applications. The spinning up just delays what I'm doing, and I'm not even using the DVD drive. I want to leave the CD in the drive, and not have it spin up. I found a website that said a udev rule will definitely disable the drive. So far, I've tried the following 2 rules (separately), but neither of them disable the DVD drive (it still spins up - even when not mounted): ENV{ID_SERIAL}=="PIONEER_DVD-RW_DVRTD11RS_SAC1009942", ENV{UDISKS_IGNORE}="1" KERNEL=="sr0",ENV{UDISKS_IGNORE}="1", RUN+="/bin/touch /home/peter/udev-rule-ran" The RUN+ in the second instance, creates my test file "udev-rule-ran", so this tells me that my rule file is being executed, and that the rule line is being run. My Question: Could you tell me what I should be doing to definitely disable the darned DVD drive? I also want to be able to enable the drive again on the occasions that I need it. Supplementary Details: I'm trying very hard to write a udev rule to disable my CD/DVD drive. I've tried various non-udev methods to disable it but none of them work. There is no loaded module¹⁾ for the drive that I can unload, so I can't use that method to disable the drive. ¹⁾ So I think the driver must be compiled into the kernel.
Setting UDISKS_IGNORE only hides the drive from udisks (desktop automounting and friends); it does not stop the spin-ups. Those are caused by in-kernel media-change polling: the kernel periodically asks the drive whether a disc has been inserted or removed, and each poll can spin the disc up. The knob for this is the block device attribute events_poll_msecs — writing 0 to it disables polling for that device, and a udev rule can apply it automatically at every boot. To enable the drive again later, write a positive polling interval back to the same attribute. This is a sketch based on the generic block-layer event-polling mechanism, not something I have tested against your specific Pioneer drive; a sketch follows.
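A hedged sketch — first a one-off test as root, then the persistent udev rule (the rule file name is arbitrary):
# one-off, lost at reboot:
echo 0 > /sys/block/sr0/events_poll_msecs
# /etc/udev/rules.d/99-nocdpoll.rules:
ACTION=="add|change", KERNEL=="sr0", ATTR{events_poll_msecs}="0"
To re-enable media polling, write a positive interval (e.g. 2000 , in milliseconds) back to the same attribute.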
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/342736", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214397/" ] }
342,767
I'm trying to find all the javascript files in subdirectories of my project. The working tree is as follows: .├── app├── assets├── components├── email.txt├── gulpfile.js├── node_modules└── package.json For that end I say: find . -name *.js or find . -path *.js Strangely, first command reports only gulpfile.js , while the second one reports nothing. But I have .js files in app , components and node_modules ! Why wouldn't they show up?
Put single quotes around *.js . Without quotes, the shell is expanding the wildcard to filenames that match in the current directory, so find only gets those filenames, either in the current directory or sub-directories in the find spec (but never sees the wildcard). To specify it as a pattern for find to use, you need to prevent the shell expansion. '*.js' does that.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342767", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23424/" ] }
342,815
How to send a ffmpeg stream to the framebuffer /dev/fb0 ? For instance, how to send the webcam output to the framebuffer? I am looking for an equivalent of this mplayer command but using ffmpeg exclusively: mplayer -ov fbdev2 -tv driver=v4l2 device=/dev/video0 tv:// P. S.: I don't want to pipe the output of ffmpeg to mplayer
There is a lot of misinformation on the web about this not being possible, however, it most definitely is possible. Note, you may need to adjust the -i and -pix_fmt a bit for your situation. ffmpeg -i /dev/video0 -pix_fmt bgra -f fbdev /dev/fb0 Also note, the user executing this must have privileges to write to the framebuffer (i.e. root).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/342815", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189711/" ] }
342,819
Supposing I am in the same folder as an executable file, I would need to type this to execute it: ./file I would rather not have to type / , because / is difficult for me to type. Is there an easier way to execute a file? Ideally just some simple syntax like: .file or something else but easier than having to insert the / character there. Perhaps there is some way to put something in the /bin directory, or create an alias for the interpreter, so that I could use: p file
It can be "risky" but you could just have . in your PATH. As has been said in others, this can be dangerous so always ensure . is at the end of the PATH rather than the beginning.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/342819", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
342,828
I used the command mv folder_name .... I thought by using .. twice, it would move it back two folders. Unfortunately my files disappeared. I need to recover them.
Your directory is still there :) You have renamed it .... Because files whose names start with . are hidden, you cannot see the directory unless you display hidden files run ls -A and there it is! Revert the change: mv .... original_folder_name and do the move correctly mv original_folder_name ../..
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/342828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170338/" ] }
343,053
The script below throws an error: #!/bin/bash if [[ $(wc -l "/disk1/environment.sh") -ge 0 ]];then echo "File has data" fi line 2: [[: 5 /disk1/environment.sh: division by 0 (error token is "/environment.sh") But the code below works fine: #!/bin/bash if [[ $(wc -l "/disk1/environment.sh") > 0 ]];then echo "File has data" fi Could someone please tell me why '-ge' and '>' behave differently here? bash version : GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
That's because of the output from wc -l /filename , this would output e.g.: 5 /filename but you are doing integer comparison (operator: -ge ), hence the extraneous portion /filename is invalid leading to the error message. Just pass the filename to wc via STDIN so that wc just returns the count: [[ $(wc -l </disk1/environment.sh) -ge 0 ]] In the second case, [[ $(wc -l /disk1/environment.sh) > 0 ]] , you are simply checking if the output from command substitution, $(wc -l /disk1/environment.sh) , sorts after 0 , lexicographically; which will always be the case unless wc returns some error and produces nothing on STDOUT. Just to note, [[ does not support arithmetic operators like > , >= , < , <= , you need the (( keyword for these: (( $(wc -l </disk1/environment.sh) > 0 ))
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/343053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150205/" ] }
343,109
I recently installed Antergos with Xfce. However, the default cursor is massive when it's over a window that isn't the OS itself. So, if I'm within the file manager everything looks normal. With Chrome/VSCode/Terminator the cursor is drastically bigger. Within my appearance settings it is set to 16 which is the lowest. Any ideas?
On Xfce4, after I had used a 4K display, my mouse cursor size was also too large. I just went to "Settings -> Mouse and Touchpad -> Theme", Cursor Size said 16 with down arrow disabled. I increased the size to 32, then back to 16 and the size of the cursor was back to the right (i.e. small) size.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/343109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204976/" ] }
343,168
How do you stop the command service <name> status from using less on its output? I have a script that automates some sysadmin actions, and after I upgraded my server to Ubuntu 16.04 it breaks, because the actions that check service status now block: the status command pipes its output through a pager like less. This happens specifically with the supervisor service. I have several daemons configured to run under it, and when I run sudo service supervisor status , I get:

* supervisor.service - Supervisor process control system for UNIX
   Loaded: loaded (/lib/systemd/system/supervisor.service; disabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-02-06 20:35:34 EST; 12h ago
     Docs: http://supervisord.org
  Process: 18476 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS)
 Main PID: 20228 (supervisord)
   CGroup: /system.slice/supervisor.service
           |- 7387 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload
           |- 7388 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload
           |- 7389 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload
           |- ... (dozens more identical celery worker lines, elided) ...
           |- 7805 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload
lines 1-66

And it doesn't return until I manually scroll to the end or press Q to exit. How do I disable this behavior?
Ubuntu 16.04 is a systemd system, where the service status command actually calls systemctl status , and systemctl has a --no-pager option that does exactly what you're looking for. So you may be better off calling systemctl directly in your script:

sudo systemctl --no-pager status supervisor

The SYSTEMD_PAGER environment variable

Another way, as pointed out by @jwodder, is to set the SYSTEMD_PAGER environment variable to an empty value (or to cat). This has the added benefit of also affecting the output of systemctl when it is called by another application such as service :

export SYSTEMD_PAGER=
sudo service supervisor status

will produce the same non-paginated output.
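To make this concrete for a script, here is a minimal sketch of how an automation script might check the unit without ever blocking on a pager. The supervisor unit name is taken from the question; the is-active check is just one illustrative way to consume the status non-interactively rather than parsing human-readable output:

#!/bin/sh
# Disable systemd's pager for every systemctl/service call in this script
export SYSTEMD_PAGER=

# Belt and braces: pass --no-pager on the direct call as well
sudo systemctl --no-pager status supervisor

# For scripting, is-active avoids parsing status output entirely:
# it exits 0 when the unit is running and non-zero otherwise
if systemctl --quiet is-active supervisor; then
    echo "supervisor is running"
else
    echo "supervisor is NOT running"
fi

Setting SYSTEMD_PAGER=cat instead of an empty value also works, and keeps the output on screen while still returning immediately.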
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/343168", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16477/" ] }