source_id
int64
1
74.7M
question
stringlengths
0
40.2k
response
stringlengths
0
111k
metadata
dict
302,683
I use a software for genetic analysis. In the Linux system it works normal. However, I am curious about the bash for windows. And when I running my analysis the following error appears: OMP: Error #100: Fatal system error detected.OMP: System error #22: Invalid argumentforrtl: error (76): Abort trap signal Please, anyone have any idea what can be and how to solve?
It is a known bug in WSL with programs linked against Intel's MKL library. Solution is to export KMP_AFFINITY=disabled before running the program.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163849/" ] }
302,724
I have a group manager and an user user1 . user1 will create a directory by example in the webserver path /var/www/user1Project . How to allow the group manager to r/w in any directory owned by user1 ? I already tried to add group manager to user1 . But it did not solved my problem. A user from manager group is not allowed to write in user1Project . I do not know why.
This is quite special and you could not manage this by using the legacy permissions architecture of an unixoid system. The closest approach to your intention is using ACLs . Issue the following command (optionally as superuser): setfacl -d -R -m g:manager:rwx /dir/of/user1setfacl -R -m g:manager:rwx /dir/of/user1 The first command sets the default permissions to the directory so that they apply to newly created files (by user1). The second command sets the actual rights of the folders and files recursively. Note, that the ACL infrastructure does not apply to the Apache Webserver. Apache only cares about the legacy permissions (user/group/others permission). So inside the webfolder every file/folder must be in the www-data group and every file must have at least read permissions for www-data . Folders should have the execute permissions for www-data for the Index searching. Update: To force the newly created files inside a directory to inherit the group of this directory set the gid bit of the directory: chmod g+s /web/directory Newly created files inside /web/directory will then inherit the group of /web/directory
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184166/" ] }
302,733
I've been running a Minecraft server with a sysV init script. It is a very good script; it runs Minecraft in a "screen"; it can ensure Minecraft is never started twice; and it waits for Minecraft to shut down when stopped. It can even pass a command to Minecraft with /etc/init.d/minecraft command <command> -- this is useful for scheduled backups. Now I've upgraded to Debian Jessie, which has systemd. But for now, I keep my old-style script because it's awesome. Still, I'm actually very supportive of systemd - it really looks like a lot of improvement, simplification and centralization. I remember systemd developers promising that "old sysV scripts will just work as they did before", but turns out it's not so easy! I remember having problems with some startup scripts before; apparently, just putting a script into /etc/init.d and marking it executable is no longer enough - I had to "enable" them in order to make them work. "Well," I thought, "now it's recognized by systemd, and now I can control it via systemctl - and it should probably just use my old script to process the commands!" And turns out I was very much wrong. It doesn't start properly, it doesn't stop properly, it does not display status properly, not to mention the absence of the "command" command. I've started to look for information about how systemd is better than sysV, and what can I do to simplify and empower everything. Apparently, systemctl just makes the simplest unit file possible on its own, and hopes it will suffice! Now I'm wondering if systemd is actually incapable of handling such complex situations at all! I see that an average systemd service basically consists of some requirements and ExecStart. Like it's all systemd needs to monitor the daemon. Type in the conditions and the executable name, and systemd will handle its starting, stopping and who knows what else. But it's not that easy!! You can't just kill Minecraft's PID (not to mention it's different from the screen's PID)! I want to write more complex scripts for every action, maybe even add new actions like "command" (okay, I've already accepted that it's probably not possible at all). For "status", it has to monitor the Java process, and for stop, it has to send a command to the Minecraft console and then wait for both Java and screen to die! I also want to be sure that systemd will not just try to SIGHUP or SIGINT or SIGTERM it! So, what is the slick, modern, "intended systemd way" to do it that really allows us to utilize all the "improvements" and "simplification" systemd gives us? Surely it should be able to handle something more complex than a simple one-process daemon started in a single line and killed with a SIGINT? Should I maybe create a systemd unit and manually specify calling my old script there in every command, like this: ExecStart=/etc/init.d/minecraft startExecReload=/etc/init.d/minecraft reload(and how do I make the "stop" command and explain how to find the processes to watch for the "status" command?..) I am very pro-innovation, pro-Poettering and pro-systemd in this regard, and I believe there should be a way to do it better than it was before - maybe in an entirely different way, as it usually is with Poettering (which I like about him!). But this doesn't look like much of an improvement - more like a huge regression in need of a huge heap of kludges to even continue as it was before. "sysV scripts will continue working", my ponytail! 
I can't even be sure if it calls my script to stop Minecraft properly on system shutdown, or just looks at "systemctl status" and sees that it's already "inactive (dead)". Any better ideas?
After browsing through the manpages few more times (yeah, the answer is never there the first time...), I've come up with a solution... which did not work. After browsing even more, I've finally come up with the most elegant solution possible. [Unit]Description=Minecraft serverAfter=local-fs.target network.target[Service]WorkingDirectory=/home/minecraft/minecraft_serverUser=minecraftGroup=minecraftType=forking# Run it as a non-root user in a specific directoryExecStart=/usr/bin/screen -h 1024 -dmS minecraft ./minecraft_server.sh# I like to keep my commandline to launch it in a separate file# because sometimes I want to change it or launch it manually# If it's in the WorkingDirectory, then we can use a relative path# Send "stop" to the Minecraft server consoleExecStop=/usr/bin/screen -p 0 -S minecraft -X eval 'stuff \"stop\"\015'# Wait for the PID to die - otherwise it's killed after this command finishes!ExecStop=/bin/bash -c "while ps -p $MAINPID > /dev/null; do /bin/sleep 1; done"# Note that absolute paths for all executables are required![Install]WantedBy=multi-user.target This really does look better than my original script! However, there are a few regressions. If I want to pass commands to the server console, then I'll have to make a separate script to do it. After running systemctl start minecraft or systemctl stop minecraft , be sure to check systemctl status minecraft , because those commands give no output at all, even if they actually fail. This is the only major regression compared to scripts - "always examine your output" is #1 rule in IT, but systemd doesn't seem to care about it... Also, I expected systemd to be able to manage the service shutdown without the "wait for the PID to die" workaround. In the old init script, I had to do this manually, because it was a script, and systemd is trying to eliminate the need for complex scripts that to the same things; it eliminates the need to manually script in all the timeouts and killing whoever did not die, but "waiting for the pid to die" is the next most common thing, and we still need to script it.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/302733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184164/" ] }
302,781
I have two almost identical config files in two different directories ## file1.conf ##tunnel: enable: true interval: 20 public: falseloop: enable: false interval: 20 public: falselink: enable: true interval: 20 public: false## file2.conf ##tunnel: enable: true interval: 20 public: falseloop: interval: 20 enable: false public: falselink: enable: true interval: 20 public: false Now I want to change enable: false to enable: true but only for the loop section in both files. How can I do this using just one set of commands for both files?
This can be done with an inplace edit using sed -i . sed -i '/^loop:/,/^$/ { s/enable:.*$/enable: true/ }' file1.conf file2.conf The command breaks down into two main parts: /^loop:/,/^$/ { .... } This means we limit the stuff inside the {...} to the section that begins with loop: and ends with a blank line. Inside that we have s/enable:.*$/enable: true/ Which simply ensures the enable: line is set to true. The result is that we rewrite file1.conf and file2.conf so that the section beginning loop: and ending with a blank line has any enable line rewritten to enable: true
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302781", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84978/" ] }
302,795
Without initramfs/initrd support, the following kernel command line won't work: linux /bzImage root=UUID=666c2eee-193d-42db-a490-4c444342bd4e ro How can I identify my root partition via UUID without the need for an initramfs/initrd? I can't use a device name like /dev/sda1 either, because the partition resides on a USB-Stick and needs to work on different machines.
I found the answer burried in another thread : A UUID identifies a filesystems, whereas a PARTUUID identifies a partition (i.e. remains intact after reformatting). Without initramfs/initrd the kernel only supports PARTUUID. To find the PARTUUID of the block devices in your machine use sudo blkid This will print, for example /dev/sda1: UUID="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" TYPE="ext2" PARTUUID="f3f4g3f4-02" You can now modify you linux command line as follows: linux /bzImage root=PARTUUID=f3f4g3f4-02 ro This will boot from the partition with PARTUUID f3f4g3f4-02, which in this case is /dev/sda1 .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184225/" ] }
302,916
[root@localhost opt]# cat cfgkey = value[root@localhost opt]# grep 'key\s*=\s*.+' cfg[root@localhost opt]# My intent is: the = sign may be followed by zero or more spaces, but must be followed one or more non-space characters. Why doesn't it output the line key = value ?
Observe: $ grep 'key\s*=\s*.+' cfg$ grep 'key\s*=\s*.\+' cfgkey = value$ grep -E 'key\s*=\s*.+' cfgkey = value In Basic Regular Expressions (BRE, the default), + means a plus sign. As a GNU extension, one can signal one-or-more-of-the-previous-character using \+ . This is also true of ? , { , | , and ( . Unless escaped with a backslash, these are all treated a ordinary characters under BRE. The rules change if you use Extended Regular Expressions, -E . For ERE, the backslash isn't needed and a plain + means one-or-more-of-the-previous-character. Under ERE, \+ means a plain normal plus sign.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65170/" ] }
302,938
I know those two mechanisms (let's call them A and B) limit the resource for a process. I want to know the cooperation of those two. If A limits a resource for a process, then what happens when B also limits the same resource?
All limits apply independently. When a process makes a request that would require going over some limit, the request is denied. This holds whether the limit is for a cgroup, per process, or per user. Since cgroup sets limits per groups of processes, and setrlimit sets limits per user or per process, the mechanisms are generally not redundant. It's possible for a given request to exceed both cgroup and setrlimit limits, or only one of them. Keep in mind that all limits are maximum allowed values, not guaranteed minimums. For example, if there's a limit to 1GB of memory per process, a process with 200MB of memory may still get its request to allocate 100MB denied if there's no more available memory in the system, regardless of any applicable limits. If a setrlimit and a cgroup limit both apply, then that's at least three maximums that can be exceeded: the setrlimit maximum, the cgroup maximum, and the currently available resource maximum.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302938", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106512/" ] }
302,943
Are there any utilities to filter a sample from a stream on the command-line, e.g. print every 100th line of a file or print a line of a file out with probability 0.01 or via algorithms like reservoir sampling ? Update: So far I found: print every 100th line of a file: sed -n '0~100p'
The simple solutions with (GNU) awk: Every one in 100 (lines with number divisible by 100): do_something | awk 'NR % 100 == 0' or pseudorandomly: do_something | awk 'rand() < 0.01' The numbers will likely not be exactly uniform and it may be necessary to add BEGIN{ srand() } to initialize a new seed for each run.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/376/" ] }
302,948
$ k=v p &[1] 3028 is there any way for p to change the contents of /proc/3028/environ to not mention k=v while p is still running?
On Linux, you can overwrite the value of the environment strings on the stack. So you can hide the entry by overwriting it with zeros or anything else: #include <sys/types.h>#include <unistd.h>#include <stdio.h>#include <stdlib.h>#include <string.h>int main(int argc, char* argv[], char* envp[]) { char cmd[100]; while (*envp) { if (strncmp(*envp, "k=", 2) == 0) memset(*envp, 0, strlen(*envp)); envp++; } sprintf(cmd, "cat /proc/%u/environ", getpid()); system(cmd); return 0;} Run as: $ env -i a=foo k=v b=bar ./wipe-env | hd00000000 61 3d 66 6f 6f 00 00 00 00 00 62 3d 62 61 72 00 |a=foo.....b=bar.|00000010 the k=v has been overwritten with \0\0\0 . Note that setenv("k", "", 1) to overwrite the value won't work as in that case, a new "k=" string is allocated. If you've not otherwise modified the k environment variable with setenv() / putenv() , then you should also be able to do something like this to get the address of the k=v string on the stack (well, of one of them): #include <sys/types.h>#include <unistd.h>#include <stdio.h>#include <stdlib.h>#include <string.h>int main(int argc, char* argv[]) { char cmd[100]; char *e = getenv("k"); if (e) { e -= strlen("k="); memset(e, 0, strlen(e)); } sprintf(cmd, "cat /proc/%u/environ", getpid()); system(cmd); return 0;} Note however that it removes only one of the k=v entries received in the environment. Usually, there is only one, but nothing is stopping anyone from passing both k=v1 and k=v2 (or k=v twice) in the env list passed to execve() . That has been the cause of security vulnerabilities in the past such as CVE-2016-2381 . It could genuinely happen with bash prior to shellshock when exporting both a variable and function by the same name. In any case, there will always be a small window during which the env var string has not been overridden yet, so you may want to find another way to pass the secret information to the command (like a pipe for instance) if exposing it via /proc/pid/environ is a concern. Also note that contrary to /proc/pid/cmdline , /proc/pid/environment is only accessible by processes with the same euid or root (or root only if the euid and ruid of the process are not the same it would seem). You can hide that value from them in /proc/pid/environ , but they may still be able to get any other copy you've made of the string in memory, for instance by attaching a debugger to it. See https://www.kernel.org/doc/Documentation/security/Yama.txt for ways to prevent at least non-root users from doing that.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/302948", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23522/" ] }
303,143
From the man pages: ln - make links between files and link - call the link function to create a link to a file These seem to do the same thing however ln takes a lot of options as well. Is link just a very basic ln ? Is there any reason to use link over ln?
link used solely for hard links, calls the link() system function and doesn't perform error checking when attempting to create the link ln has error checking and can create hard and soft links
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/303143", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102238/" ] }
303,157
I was testing the speed of Bash and Python by running a loop 1 billion times. $ cat python.py#!/bin/python# python v3.5i=0;while i<=1000000000: i=i+1; Bash code: $ cat bash2.sh#!/bin/bash# bash v4.3i=0while [[ $i -le 1000000000 ]]dolet i++done Using the time command I found out that the Python code takes just 48 seconds to finish while the Bash code took over 1 hour before I killed the script. Why is this so? I expected that Bash would be faster. Is there something wrong with my script or is Bash really much slower with this script?
Shell loops are slow and bash's are the slowest.Shells aren't meant to do heavy work in loops. Shells are meant to launch a few external, optimized processes on batches of data. Anyway, I was curious how shell loops compare so I made a little benchmark: #!/bin/bashexport IT=$((10**6))echo POSIX:for sh in dash bash ksh zsh; do TIMEFORMAT="%RR %UU %SS $sh" time $sh -c 'i=0; while [ "$IT" -gt "$i" ]; do i=$((i+1)); done'doneecho C-LIKE:for sh in bash ksh zsh; do TIMEFORMAT="%RR %UU %SS $sh" time $sh -c 'for ((i=0;i<IT;i++)); do :; done'doneG=$((10**9))TIMEFORMAT="%RR %UU %SS 1000*C"echo 'int main(){ int i,sum; for(i=0;i<IT;i++) sum+=i; printf("%d\n", sum); return 0; }' | gcc -include stdio.h -O3 -x c -DIT=$G - time ./a.out ( Details: CPU: Intel(R) Core(TM) i5 CPU M 430 @ 2.27GHz ksh: version sh (AT&T Research) 93u+ 2012-08-01 bash: GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu) zsh: zsh 5.2 (x86_64-unknown-linux-gnu) dash: 0.5.7-4ubuntu1 ) The (abbreviated) results (time per iteration) are: POSIX:5.8 µs dash8.5 µs ksh14.6 µs zsh22.6 µs bashC-LIKE:2.7 µs ksh5.8 µs zsh11.7 µs bashC:0.4 ns C From the results: If you want a slightly faster shell loop, then if you have the [[ syntax and you want a fast shell loop, you're in an advanced shell and you have the C-like for loop too. Use the C like for loop, then. They can be about 2 times as fast as while [ -loops in the same shell. ksh has the fastest for ( loop at about 2.7µs per iteration dash has the fastest while [ loop at about 5.8µs per iteration C for loops can be 3-4 decimal orders of magnitude faster. (I heard the Torvalds love C). The optimized C for loop is 56500 times faster than bash's while [ loop (the slowest shell loop) and 6750 times faster than ksh's for ( loop (the fastest shell loop). Again, the slowness of shells shouldn't matter much though, because the typical pattern with shells is to offload to a few processes of external, optimized programs. With this pattern, shells often make it much easier to write scripts with performance superior to python scripts (last time I checked, creating process pipelines in python was rather clumsy). Another thing to consider is startup time. time python3 -c ' ' takes 30 to 40 ms on my PC whereas shells take around 3ms. If you launch a lot of scripts, this quickly adds up and you can do very very much in the extra 27-37 ms that python takes just to start. Small scripts can be finished several times over in that time frame. (NodeJs is probably the worst scripting runtime in this department as it takes about 100ms just to start (even though once it has started, you'd be hard pressed to find a better performer among scripting languages)).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/303157", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52733/" ] }
303,190
I run the following command in a freshly started bash (without any prior child processes) $ prlimit --pid $BASHPID --nproc=20: which gives me an avalanche of bash: fork: retry: No child processesbash: fork: retry: No child processesbash: fork: retry: No child processesbash: fork: retry: No child processesbash: fork: Resource temporarily unavailable I wonder is 20 to low a number, or is the procedure wrong?What is going on here? I know that bash has an internal builtin ulimit call, yet why would not prlimit work update# also bash builtin ulimit yields the same result $ ulimit -Su 20bash: fork: retry: No child processesbash: fork: retry: No child processesbash: fork: retry: No child processesbash: fork: retry: No child processesbash: fork: Resource temporarily unavailablebash: wait_for: No record of process 7527bash: fork: retry: No child processesbash: fork: retry: No child processes An answer to this question might be related to how the ulimit RLIMIT_NPROC is to be understood. It seems to be both UID but also PID bound? Maybe somebody would also be able to give insight to what distinguishes the two different error messages No child processes and Resource temporarily unavailable
You've misunderstood how the process limit works. prlimit --nproc a.k.a. RLIMIT_NPROC a.k.a. ulimit -u is the maximum number of processes¹ for the user as a whole. If you already have 20 processes running, and you set the limit to 20, you can't create any new process. What matters is how many processes are running as your user, it doesn't matter who their parent is or what their limit setting is. The subtlety is that while the limit is global, it only applies to the processes where it's set. So if you have 18 processes running and you run prlimit --nproc 20 bash in one terminal and prlimit --nproc 30 bash in another terminal, the first bash is unable to create any subprocesses, but the second one can create some more, until there's a total of 30 (whether they've been started from that bash or not). If you set the limit to 1 for a process, then that process can't fork at all, but other processes can still fork. If you set the limit to a number in your login scripts, then that number applies to your processes (except ones started without reading your login scripts, e.g. from a cron job). Other cases can get confusing. It's easiest to describe this from an implementation point of view. The limit is only read when executing the fork system call¹. When a process calls fork , the kernel counts the number R of processes are running as the same user. If the calling process's NPROC limit is less or equal to R then the call is rejected with the error EAGAIN (“resource temporarily unavailable”, i.e. “try again”). In your case, you probably already have at least 20 processes running, and your .bashrc runs several subprocesses (or rather, fails to run them because the process limit has been reached). You see two different messages because bash tries to fork several times when the error is EAGAIN . The first times it displays a message to say that it's retrying, and finally it displays a message to say that it's given up. ¹ More precisely, kernel threads. ² This system call is called clone on Linux.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303190", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
303,208
I wanted to know if by default, does anything you enter into the terminal get logged somewhere in your system. I can imagine that to a certain degree it does because you can use the up and down arrows to recall commands entered, but acts similar to how RAM works in a computer, where once you close the terminal the commands are lost/removed? Or is there someone where within the computer that they are still being kept?
short: no long: It depends: most shells have a history mechanism which records command-lines. shell-history usually is configurable (and may not be activated). But things like text-editors generally do not (your keystrokes are not logged—usually) Also, even with shells, it is not common to be able to have multiple instances of shells running and record all of the commands from these instance. Besides shell-history, there are other ways to record your commands, e.g., using low-level auditing programs (which record the resources which your commands use), or text-only things like script (which can record all of the information sent from the computer to the terminal). Even if your shell is not configured to record commands, you may work in an environment where auditing is configured. For those "by default", there is a record. Further reading: How to disable Bash shell commands history on Linux How can you log every command typed How to track/log commands executed on a shell?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303208", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184540/" ] }
303,282
I'm using a fresh install of ubuntu 16.04, with the bash shell. There are two things that I want to do: Set up vi mode so I can have vim-like movements from the terminal exit insert mode by typing jk I read in another post how this can be done with zsh , how can I do it with bash ? tl;dr put bind '"jk":vi-movement-mode' into your .bashrc file after set -o vi :) server@thinkpad:~$ tail -n 2 .bashrcset -o vibind '"jk":vi-movement-mode' please see @grochmal's answer for a more detailed explanation
TL;DR Bash has a similar functionality to zsh 's bindkey through bind , but it does not have several vi modes like zsh . After set -o vi you can do: bind '"jk":vi-movement-mode' which is the equivalent of zsh 's bindkey -M <all vi modes> jk vi-movement-mode The vi-movement-mode functions comes from inputrc (see /etc/inputrc for a list of them). Full text As Stephen Harris points out in his comment: .bashrc is called by bash always (and notably not by other shells). .bash_profile is only called on login shells (and again, bash only). Several distros come with a .bash_profile skeleton that looks as follows: # ~/.bash_profile[[ -f ~/.bashrc ]] && . ~/.bashrc Which is a good content for .bash_profile since you can simply forget it exists. Now, to map j k to Esc in the shell session, that is not really possible. When you do: inoremap jk <esc> In Vim, after you type j , Vim knows it needs to wait a little bit to see if you type k next and it should invoke the mapping (or that you type another key and the mapping should not be triggered). As an addendum this is controlled by :set timeoutlen=<miliseconds> in Vim (see :h timeoutlen ). Several shell's or X11 has no such timeout control and does not allow for multiple character mappings. Only a mapping of a single key is allowed (But see the support notes below.) . set -o vi Does not read .vimrc , it only imitates some vi (not even vim ) key combinations that can be used in the shell. The same can be said about -o emacs , it does not come with the full power of emacs . zsh support zsh actually supports map timeout. And you can use the following to map jk to <esc> : bindkey -v # instead of set -o vibindkey -e jk \\e (That will need to go to ~/.zshrc not ~/.bashrc ) Yet, I advise against this. I use vim and zsh most of the time. I have inoremap jk <esc> in my vimrc and I did try using the bindkey combination above. zsh waits too long to print j when using it, and that annoyed me a lot. bash support bash supports readline bind . I believe that bash can be compiled without readilne therefore there may be some rare systems that have bash that do not support bind (be watchful). To map jk to <esc> in bash you need to do: set -o vibind '"jk":"\e"' (yes that's a double level of quoting, it is needed) Again, this makes typing j quite annoying. But somehow less annoying than the zsh solution on my machine (probably the default timeout is shorter). Workaround (for non-bash and non-zsh shells) The reason for remapping the Esc key is that it lies quite far away on the keyboard, and typing it takes time. A trick that can be borrowed from the emacs guys is to remap CapsLock since it is a useless key anyway. emacs guys remap it to Ctrl but we will remap it to Esc . Let's use xev -event keyboard to check the keycode of CapsLock : KeyPress event, serial 25, synthetic NO, window 0x1c00001, root 0x496, subw 0x0, time 8609026, (764,557), root:(765,576), state 0x0, keycode 66 (keysym 0xffe5, Caps_Lock), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False And to check the function of Esc : KeyPress event, serial 25, synthetic NO, window 0x1c00001, root 0x496, subw 0x0, time 9488531, (571,525), root:(572,544), state 0x0, keycode 9 (keysym 0xff1b, Escape), same_screen YES, XLookupString gives 1 bytes: (1b) " XmbLookupString gives 1 bytes: (1b) " XFilterEvent returns: False Very good, CapsLock is keycode 66 and Esc 's function is called "Escape". 
Now we can do: # diable caps lockxmodmap -e "remove lock = Caps_Lock"# make an Esc key from the keycode 66xmodmap -e "keycode 66 = Escape" The above must be done in this order. Now every time you hit CapsLock it works like an Esc key. The tricky part is where to set this. A file ~/.Xmodmap with the content: remove lock = Caps_Lockkeycode 66 = Escape Should be respected by most distros (actually display managers, but I'm saying distros for simplicity), but I saw ones that don't respect several ~/X* files. For such distros you may try something like: if [ "x" != "x$DISPLAY" ]; then xmodmap -e "remove lock = Caps_Lock" xmodmap -e "keycode 66 = Escape"fi In your .bashrc . (In theory that would be better placed in ~/.xinitrc but if a display manager does not respect .Xmodmap it will definitely not respect ~/.xnintrc .) Extra note: This only remaps CapsLock to Esc in a X11 session, therefore the map will only work in terminal emulators. Actual tty 's will not see the map. References and extra reading: Disabling CapsLock Very detailed answer on how key remapping in X11 works .bashrc vs. .bash_profile
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/303282", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173557/" ] }
303,306
Say I have the following setup : $ cat fileAtextA$ cat fileBtextB$ ln fileA myLink$ cat myLink # as expectedtextA I do not understand the following behaviour : $ cp fileB fileA$ cat myLink # expected ?textB I would have expected this outcome if I had written ln -s fileA myLink instead, but not here. I would have expected cp in overwriting mode to do the following : Copy the content of fileB somewhere on the hard drive Link fileA to that hard drive address but instead, I infer it does the following : Follow the link fileA Copy the content of fileB at that address The same does not seem to go for mv , with whick it works as I expected. My questions : Is this explained somewhere that I have missed in man cp or man mv or man ln ? Is this behaviour just a coincidence, (say if fileB is not much greater in size than fileA ) , or can it be reliably used as a feature ? Does this not defeat the idea of hard links ? Is there some way to modify the line cp fileB fileA so that the next cat myLink still shows textA ?
There is no "following the link" with hardlinks - creating a hardlinks simply gives several different names to the same file (at low level, files are actually integer numbers - "inodes", and they have names just for user convenience)- there is no "original" and "copy" - they are the same. So it is completly the same which of the hardlinks you open and write to, they are all the same. So cp by defaults opens one the files and writes to it, thus changing the file (and hence all the names it has). So yes, it is expected. Now, if you (instead of rewriting) first removed one of the names (thus reducing link count) and then recreated new file with the same name as you had, you would end up with two different files. That is what cp --remove-destination would do. 1 basics are documented at link(2) pointed to by ln(1) 2 yes it is normal behaviour and not a fluke. But see above remark about cp --remove-destination 3 no, not really. Hardlinks are simply several names for same file. What you seem to want are COW (copy-on-write) links, which only exist is special filesystems 4 yes, cp --remove-destination fileB fileA
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303306", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184607/" ] }
303,392
I'm digging through different sources, but can't find a good description of the anatomy of child reaping. This is a simple case of what I would like to understand. $ cat <( sleep 100 & wait ) &[1] 14247$ ps ax -O pgid | grep $$12126 12126 S pts/17 00:00:00 bash14248 12126 S pts/17 00:00:00 bash14249 12126 S pts/17 00:00:00 sleep 10014251 14250 S pts/17 00:00:00 grep --color=auto 12126$ kill -2 14248$ ps ax -O pgid | grep $$12126 12126 S pts/17 00:00:00 bash14248 12126 Z pts/17 00:00:00 [bash] <defunct>14249 12126 S pts/17 00:00:00 sleep 10014255 14254 S pts/17 00:00:00 grep --color=auto 12126 Why is the zombie waiting for the kid? Can you explain this one? Do I need to know C and read Bash source code to get a wider understanding of this or is there any documentation? I've already consulted: various links on this site and Stack Overflow The Linux Command Line by W. Shotts man bash Bash Reference Manual (in Bash source code docs) Bash Guide for Beginners @ tldp.org Advanced Bash-Scripting Guide GNU bash, version 4.3.42(1)-release (x86_64-pc-linux-gnu) Linux 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
The zombie isn't waiting for its child. Like any zombie process, it stays around until its parent collects it. You should display all the processes involved to understand what's going on, and look at the PPID as well. Use this command line: ps -t $(tty) -O ppid,pgid The parent of the process you're killing is cat . What happens is that bash runs the background command cat <( sleep 100 & wait ) in a subshell. Since the only thing this subshell does is to set up some redirection and then run an external command, this subshell is replaced by the external command. Here's the rundown: The original bash (12126) calls fork to execute the background command cat <( sleep 100 & wait ) in a child (14247). The child (14247) calls pipe to create a pipe, then fork to create a child to run the process substitution sleep 100 & wait . The grandchild (14248) calls fork to run sleep 100 in the background. Since the grandchild isn't interactive, the background process doesn't run in a separate process group. Then the grandchild waits for sleep to exit. The child (14247) calls setpgid (it's a background job in an interactive shell so it gets its own process group), then execve to run cat . (I'm a bit surprised that the process substitution isn't happening in the background process group.) You kill the grandchild (14248). Its parent is running cat , which knows nothing about any child process and has no business calling wait . Since the grandchild's parent doesn't reap it, the grandchild stays behind as a zombie. Eventually, cat exits — either because you kill it, or because sleep returns and closes the pipe so cat sees the end of its input. At that point, the zombie's parent dies, so the zombie is collected by init and init reaps it. If you change the command to { cat <( sleep 100 & wait ); echo done; } & then cat runs in a separate process, not in the child of the original bash process: the first child has to stay behind to run echo done . In this case, if you kill the grandchild, it doesn't stay on as a zombie, because the child (which is still running bash at that point) reaps it. See also How does linux handles zombie process and Can a zombie have orphans? Will the orphan children be disturbed by reaping the zombie?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/303392", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
303,406
I know you can boot into single user mode by editing the kernel boot options at the grub prompt (by pressing "e" to edit) if you add the word "single," but how can you boot into non-graphical mode, what used to be called init 3 ? On Ubuntu, you can add the word "text" but that doesn't seem to work on CentOS 7.
CentOS 7 uses systemd, and so uses targets . If you permanently want a text-only mode (eg a server where you don't care about graphics) then you can tell systemd about this: systemctl set-default multi-user.target Now on the next reboot you'll get a text console. This is the same as the older id:3:initdefault: in /etc/inittab to set the default run level. If you want a one-off reboot from grub (eg because of a bad video driver you're trying to fix) then the option to add to the kernel line is systemd.unit=multi-user.target
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303406", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23183/" ] }
303,420
First of all, I am not interested in actually playing the games while running Linux. Read on :) I dual boot Ubuntu and Windows 10. I spend the majority of my time in Linux but keep Windows around only for gaming. And since I live in a rural area with very slow internet speeds, I would like to have Steam download my Windows games while I'm using linux. It would be such a waste of time to boot up Windows just to keep it there for 14 hours, downloading a game while I can't do any of my normal computing tasks. I know this might be possible by installing the Windows Steam client and running with Wine (and then copying the files to my Windows partition, but wondered if there might be a more elegant solution? Has this been done before and does anyone have any suggestions?
CentOS 7 uses systemd, and so uses targets . If you permanently want a text-only mode (eg a server where you don't care about graphics) then you can tell systemd about this: systemctl set-default multi-user.target Now on the next reboot you'll get a text console. This is the same as the older id:3:initdefault: in /etc/inittab to set the default run level. If you want a one-off reboot from grub (eg because of a bad video driver you're trying to fix) then the option to add to the kernel line is systemd.unit=multi-user.target
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303420", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184671/" ] }
303,423
An answer to Linux: allowing an user to listen to a port below 1024 specified giving an executable additional permissions using setcap such that the program could bind to ports <1024: setcap 'cap_net_bind_service=+ep' /path/to/program What is the correct way to undo these permissions?
To remove capabilities from a file use the -r flag setcap -r /path/to/program This will result in the program having no capabilities.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/303423", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167427/" ] }
303,453
I ran below command: tar cvf ~/data.tar /home/user/data/ It showed lot of filenames on the screen, after 10 minutes I see: tar: Exiting with failure status due to previous errors Where can I see the error log for tar (is it somewhere under /var/log ). I can see that there is 100 Mb difference between the tarred version and the original. I want to know the names of missing files from newly created archive. du -sh data.tar /home/user/data5.7G data.tar5.8G /home/user/data Using GNU tar 1.27.1
tar is an interactive command, not a service and does not produce log files. Its output is sent to stdout and errors to stderr . You can redirect the output streams to files, but you would need to run it again. tar cvf ~/data.tar /home/user/data/ > tar_stdout.log 2> tar_stderr.log And examine the contents of the file tar_stderr.kog . Or combine both streams into one file: tar cvf ~/data.tar /home/user/data/ &> tar.log
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/303453", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23301/" ] }
303,482
edit: I would like to match column 1,2 of file1.txt with column of 1,3 of file2.txt and print the matching lines of file2.txt file1.txt: scaffold1 57482scaffold1 63114scaffold1 63118scaffold1 63129scaffold1 63139scaffold1 63279scaffold1 63294scaffold2 65015scaffold2 77268scaffold2 77335 file2.txt: scaffold1 381 382 T/A +scaffold1 384 385 T/A,G +scaffold1 385 386 G/C +scaffold1 445 446 C/T +scaffold1 57481 57482 T/A +scaffold1 63113 63114 T/A,G +scaffold1 63128 63129 G/C +scaffold2 65014 65015 G/A +scaffold2 77267 77268 G/A +scaffold2 77334 77335 C/T + output.txt: scaffold1 57481 57482 T/A +scaffold1 63113 63114 T/A,G +scaffold1 63128 63129 G/C +scaffold2 65014 65015 G/A +scaffold2 77267 77268 G/A +scaffold2 77334 77335 C/T +
An awk solution: $ awk 'NR==FNR{a[$1$2]++;next}{if($1$3 in a){print}}' file1 file2 scaffold1 57481 57482 T/A +scaffold1 63113 63114 T/A,G +scaffold1 63128 63129 G/C +scaffold2 65014 65015 G/A +scaffold2 77267 77268 G/A +scaffold2 77334 77335 C/T + NR is the current line number and FNR the current line number of the current file. The two will be equal only while the first file is being read. So the first block will only be executed while the 1st file is being read and therefore the 1st and 2nd fields of the first file are saved in the array a . Then, when the second file is being processed, we print its lines only if the 1st and 3rd field is present in a , so only if it were present in the first file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135126/" ] }
303,493
I would like to construct a short function to do the following. Let's say that I move file 'file.tex' to my documents directory: mv file.tex ~/Documents Then, I'd like to cd to that directory: cd ~/Documents I'd like to generalize this to any directory, so that I can do this: mv file.tex ~/Documentsfollow and have the follow command read the destination from the previous command, then execute accordingly. For a simple directory, this doesn't save much time, but when working with nested directories, it would be tremendous to be able to just use mv file.tex ~/Documents/folder1/subfolder1follow I thought it would be relatively simple, and that I could do something like this: follow(){ place=`history 2 | sed -n '1p;1q' | rev | cut -d ' ' -f1 | rev` cd $place} but this doesn't seem to work. If I echo $place , I do get the desired string (I'm testing it with ~/Documents ), but the last command returns No such file or directory The directory certainly exists. I'm at a loss. Could you help me out?
Instead of defining a function, you can use the variable $_ , which is expanded to the last argument of the previous command by bash . So use: cd "$_" after mv command. You can use history expansion too: cd !:$ If you must use a function: follow () { cd "$_" ;} $ follow () { cd "$_" ;}$ mv foo.sh 'foo bar'$ follow foo bar$ N.B: This answer is targeted to the exact command line arguments format you have used as we are dealing with positional parameters. For other formats e.g. mv -t foo bar.txt , you need to incorporate specific checkings beforehand, a wrapper would be appropriate then.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/303493", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155638/" ] }
303,497
So there's a user rtorrent:torrent which is in group media-agent . Here is the command I'm trying to get working: rtorrent@seedbox:/shared/storage$ cd booksbash: cd: books: Permission denied Even though the folder has permission 664: rtorrent@seedbox:/shared/storage$ ls -altotal 44drwxrwxrwx 11 media-agent media-agent 4096 Aug 15 15:05 .drwxrwxrwx 8 root root 4096 Aug 15 01:12 ..drw-rw-r-- 3 media-agent media-agent 4096 Aug 15 15:15 booksdrw-rw-r-- 2 media-agent media-agent 4096 Aug 15 15:03 cartoonsdrw-rw-r-- 4 media-agent media-agent 4096 Aug 15 01:10 gamesdrw-rw-r-- 3 media-agent media-agent 4096 Aug 15 00:47 librariesdrw-rw-r-- 5 media-agent media-agent 4096 Aug 12 16:54 media-centerdrw-rw-r-- 2 media-agent media-agent 4096 Aug 6 15:31 otherdrw-rw-r-- 8 media-agent media-agent 4096 Aug 15 01:10 personnaldrw-rw-r-- 5 media-agent media-agent 4096 Aug 15 01:10 softwaredrw-rw-r-- 2 media-agent media-agent 4096 Aug 6 18:38 sync Here is the group setup: rtorrent@seedbox:/shared/storage$ getent group | grep 'media-agent'torrent:x:1005:vinz243,www-data,ftpuser,media-agent,nodejs,rtorrentmedia-agent:x:1007:vinz243,plex,deluge,rtorrent,root,nodejsnodejs:x:1008:media-agent
To be able to cd into a directory you need x permissions. Your books directory doesn't have that: drw-rw-r-- 3 media-agent media-agent 4096 Aug 15 15:15 books Indeed none of those directories have x permissions. You can recursively fix permissions with something like find . -type d -exec chmod a+x {} \; You may need to be the media-agent user (or root ) to do that.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70159/" ] }
303,549
Suppose there is file foo bar cat dogfoo foo cat barbar foo foo foo How do we grep for lines with a certain number of occurrences of foo e.g. if the number is 1 only the first line in the sample file should be printed ?
$ grep 'foo' file | grep -v 'foo.*foo' First pick out all lines containing foo , then remove all lines with foo followed by another foo somewhere on the line. If all lines contain at least one foo (as in your example), you may skip the first grep . For a general solution to "How do I grep for exactly N occurrences of a string?": grep for lines with at least N matches, then remove lines with N+1 matches (or more).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303549", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184788/" ] }
303,605
The full portion of the Bash man page which is applicable only says: If the operating system on which bash is running supports job control, bash contains facilities to use it. Typing the suspend character (typically ^Z, Control-Z) while a process is running causes that process to be stopped and returns control to bash. Typing the delayed suspend character (typically ^Y, Control-Y) causes the process to be stopped when it attempts to read input from the terminal, and control to be returned to bash. The user may then manipulate the state of this job, using the bg command to continue it in the background, the fg command to continue it in the foreground, or the kill command to kill it. A ^Z takes effect immediately, and has the additional side effect of causing pending output and typeahead to be discarded. I have never used Ctrl - Y ; I only just learned about it. I have done fine with Ctrl - Z (suspend) only. I am trying to imagine what this option is for . When would it be useful? (Note that this feature doesn't exist on all Unix variants. It's present on Solaris and OpenBSD but not on Linux or FreeBSD. The corresponding setting is stty dsusp .) Perhaps less subjectively: Is there anything that can be accomplished with Ctrl - Y that cannot be accomplished just as easily with Ctrl - Z ?
From the 4BSD manual for csh : A ^Z takes effect immediately and is like an interrupt in that pending output and unread input are discarded when it is typed. There is another special key ^Y which does not generate a STOP signal until a program attempts to read (2) it. This can usefully be typed ahead when you have prepared some commands for a job which you wish to stop after it has read them. So, the purpose is to type multiple inputs while the first one is being processed, and have the job stop after they are done.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/303605", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
303,614
I have folders and files looks likes this: /thumbs/6b0/6b029aab9ca00329cd28fd792ecf90a.jpg/thumbs/6b0/6b029aab9ca00329cd28fd792ecf90a-s.jpg/thumbs/d11/d11e15a72e20e14c45bd2769d763126d.jpg/thumbs/d11/d11e15a72e20e14c45bd2769d763126d-s.jpg And I want to apply following command to the files not have -s in their names in all sub directories in thumbs folder. mogrify -resize 50% -quality 85 -strip filename.jpg I have look around find and grep but couldn't figure out how can I do this. Any help appreciated.
From the 4BSD manual for csh : A ^Z takes effect immediately and is like an interrupt in that pending output and unread input are discarded when it is typed. There is another special key ^Y which does not generate a STOP signal until a program attempts to read (2) it. This can usefully be typed ahead when you have prepared some commands for a job which you wish to stop after it has read them. So, the purpose is to type multiple inputs while the first one is being processed, and have the job stop after they are done.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/303614", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184846/" ] }
303,631
I want a hybrid mode where I can use all the default keymap emacs key bindings by default, and still have the ability to change to vi-command mode. How do I set this up?
I read @Tom Hale 's answers here and here . I think instead of moving vi-insert bindings to emacs , the better approach is to move emacs bindings to vi-insert . The reason is because vi-command has bindings that switch to vi-insert and it's hard to emulate the functionality to make it switch to emacs mode. For example, the A command in vi-command defaults to vi-append-eol (appending at the end of the line and switch to vi-insert ). You can't make A switch to emacs mode because it is bound to a function and not a macro. For example, this wouldn't work "A": vi-append-eol emacs-editing-mode Nor this, using @Tom Hale 's answer "A": vi-append-eol "\ee" You could do this: "A": "$a\ee" But now that depends on the "a" , vi-append-mode command, which also needs to be rebound. "a" could be move forward then "i" . "i" could just switch to emacs . There's a whole chain of commands to translate into macros which is a pain to do. So you are better off moving the emacs bindings to vi-insert . So we want to set vi-insert bindings that are unique to emacs And we want to make a decision on which binding to use if they have different bindings for the same key sequence.If they have the exact same binding, we ignore them. This can be done with this command comm -3\ <(INPUTRC=/dev/null bash -c 'bind -pm emacs' | LC_ALL='C' grep -vE '^#|: (do-lowercase-version|self-insert)$' | sort) \ <(INPUTRC=/dev/null bash -c 'bind -pm vi-insert' | LC_ALL='C' grep -vE '^#|: (do-lowercase-version|self-insert)$' | sort) | cat The reason why the | cat is there is explained here The -3 throws away the "exact same bindings"So you go through this list and look for bindings on the left column. For each binding on the left column: If there is a duplicate binding for the same key sequence, such as "\C-d": delete-char "\C-d": vi-eof-maybe Choose one of them. If you want the vi-insert one (on the right), you can delete both lines, because we will be adding these bindings to vi-insert which already has the vi-insert binding. If you want the emacs one (on the left), delete the vi-insert one. If there is a unique binding on the right column ( vi-insert ), such as "\e": vi-movement-mode delete it because `vi-insert already has it. The rest of the bindings will be on the left column ( emacs ). Leave these alone because we will add these to vi-insert . Here is my .inputrc with the emacs bindings I selected added to vi-insert . I decided not to use the "kj" to switch to vi-command in @Tom Hale 's answer because it can be done with "\ee" which takes you from emacs to vi-insert then another "\e" which takes you from vi-insert to vi-command . There are actually words other than blackjack containing kj and words with jk (mostly place names) I kept "\C-d": delete-char and threw away "\C-d": vi-eof-maybe . Because I could just use Enter for vi-eof-maybe and I don't want to accidentally quit readline by pressing "\C-d" . This means to delete the "\C-d": vi-eof-maybe binding because we are overriding the vi-eof-maybe binding in vi-insert mode with the delete-char binding. I kept "\C-n": menu-complete instead of "\C-n": next-history because I could just use down arrow for next-history . This means to delete both bindings because vi-insert already has the menu-complete binding. I kept "\C-p": menu-complete-backward instead of "\C-p": previous-history because I could press up arrow for previous-history . This means to delete both bindings because vi-insert already has the menu-complete-backward binding. 
I kept "\C-w": vi-unix-word-rubout instead of "\C-w": unix-word-rubout . I don't know what's the difference. I just stuck with the vi-insert one. This means to delete both bindings because vi-insert already has the vi-unix-word-rubout binding. I kept "\e": vi-movement-mode . This means to delete this binding because vi-insert already has the vi-movement-mode binding. set editing-mode viset keymap emacs"\ee": vi-editing-modeset keymap vi-command"\ee": emacs-editing-mode# key bindings to get out of vi-editing-modeset keymap vi-insert"\ee": emacs-editing-mode# emacs keybindings in vi-insert mode"\C-@": set-mark"\C-]": character-search"\C-_": undo"\C-a": beginning-of-line"\C-b": backward-char"\C-d": delete-char"\C-e": end-of-line"\C-f": forward-char"\C-g": abort"\C-k": kill-line"\C-l": clear-screen"\C-o": operate-and-get-next"\C-q": quoted-insert"\C-x!": possible-command-completions"\C-x$": possible-variable-completions"\C-x(": start-kbd-macro"\C-x)": end-kbd-macro"\C-x*": glob-expand-word"\C-x/": possible-filename-completions"\C-x@": possible-hostname-completions"\C-x\C-?": backward-kill-line"\C-x\C-e": edit-and-execute-command"\C-x\C-g": abort"\C-x\C-r": re-read-init-file"\C-x\C-u": undo"\C-x\C-v": display-shell-version"\C-x\C-x": exchange-point-and-mark"\C-xe": call-last-kbd-macro"\C-xg": glob-list-expansions"\C-x~": possible-username-completions"\e ": set-mark"\e!": complete-command"\e#": insert-comment"\e$": complete-variable"\e&": tilde-expand"\e*": insert-completions"\e-": digit-argument"\e.": insert-last-argument"\e.": yank-last-arg"\e/": complete-filename"\e0": digit-argument"\e1": digit-argument"\e2": digit-argument"\e3": digit-argument"\e4": digit-argument"\e5": digit-argument"\e6": digit-argument"\e7": digit-argument"\e8": digit-argument"\e9": digit-argument"\e<": beginning-of-history"\e=": possible-completions"\e>": end-of-history"\e?": possible-completions"\e@": complete-hostname"\e\C-?": backward-kill-word"\e\C-]": character-search-backward"\e\C-e": shell-expand-line"\e\C-g": abort"\e\C-h": backward-kill-word"\e\C-i": dynamic-complete-history"\e\C-r": revert-line"\e\C-y": yank-nth-arg"\e\\": delete-horizontal-space"\e\e": complete"\e^": history-expand-line"\e_": insert-last-argument"\e_": yank-last-arg"\eb": backward-word"\ec": capitalize-word"\ed": kill-word"\ef": forward-word"\eg": glob-complete-word"\el": downcase-word"\en": non-incremental-forward-search-history"\ep": non-incremental-reverse-search-history"\er": revert-line"\et": transpose-words"\eu": upcase-word"\ey": yank-pop"\e{": complete-into-braces"\e~": complete-username UPDATE I think I made it a little better. For a while, I turned back on the "jk" mooshing to switch to vi command because pressing "\e" to switch to vi-command had a delay since a lot of emacs commands moved over to vi-insert uses "\e" as a leader. This shows the "jk" mooshing commented out. I currently use "\ee" to cycle modes. I didn't unbind the "\e" to switch from vi-insert to vi-command because I don't see a need. So it has the effect where if you press "\e" in vi-insert and wait, you will go to vi-command . To get from vi-command to vi-insert , you'll just press one of the commands such as "A" or "i" so allowing cycling between 3 modes won't hurt because you can also just cycle between the 2 vi modes. 
set keymap emacs"\ee": vi-editing-modeset keymap vi-command"\ee": emacs-editing-mode# key bindings to get out of vi-editing-modeset keymap vi-insert# Choose one of these editor switching modes# # moosh jk to switch#"\ee": emacs-editing-mode#"\ejk": vi-movement-mode#"\ekj": vi-movement-mode## "\ee" to cycle# can unmap "\e" to switch to vi-command but don't see a need#"\e":"\ee": vi-movement-mode UPDATE In vi-insert , "\C-w" is bound to `vi-unix-word-rubout, which stops at word boundaries or something. I didn't like that functionality. For example, try this $ cannot-delete'# press left arrow to go back behind the single quote# press \C-w in vi-insert to try to delete cannot-delete, it won't work this bug report describes the problem, although I don't have a problem with the example provided. So you can bind "\C-w" to the emacs unix-word-rubout to fix this. To rebind "\C-w" , you might need to unbind defaults . # In .bashrcstty werase undef If you would like to unbind all defaults: # In .inputrcset bind-tty-special-chars off Not sure if it matters, but I am on macOS so there are other default bindings I remove . Then in your .inputrc : "\C-w": unix-word-rubout
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
303,644
I have to replace a large block of text (shell script code) in a file with another block of text. I am impressed with the answers to How can I use sed to replace a multi-line string? (answered by antak) and Multi-line replace (answered by Bruce Ediger), but I have some trouble using them. antak already mentioned in his answer that streaming the entire file ( 1h;2,$H;$!d;g; ) into the buffer is not advisable for large files, because it overloads memory. I know sed can be used with an address range so that the text outside the block is preserved unchanged, and I want to use this feature. But if I use sed -i '/marker1/,/marker2/s/.*/new text (code)/' filename it will insert the new text (code) repeatedly, once for each line in the range. Hence I have to treat the whole block as one unit, using something similar to what antak suggested earlier, but for the block only (not for the entire file). As mentioned by Bruce Ediger, the append feature of ex , which begins with a and ends with . (dot), can be tried, but my new text (code) contains lines beginning with a dot, which may be taken as the dot that terminates the append syntax. How can I use it in this situation? ex 's dd 'number of lines' may delete multiple lines, but if the block between /marker1/ and /marker2/ has a varying (not fixed) number of lines that is to be replaced with the new text (code), how do I do it?
I suggest using the c hange command(which is essentially a d elete coupled with an a ppend, though the append is only applied for the last line in the range which is exactly what you want here): sed -i '/marker1/,/marker2/c\New text 1\New text 2' filename Here using GNU sed 's syntax for in-place editing ( -i ). That c command is otherwise standard and portable. GNU sed supports: sed '/marker1/,/marker2/cNew text 1\New text 2' filename as a non-standard extension. Newline and backslash characters must be escaped (with backslash) in the replacement text.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184145/" ] }
303,667
I would like to get the sha1 checksums of all files inside a simple tar archive as a list, or a new file Without using the disk space to unpack the big tar file. Something with piping and calculating the sha1 on the fly, directing the output to /dev/null I searched google a lot and did some experiments with pipes but could not get very far. Would really save me a lot of time checking backups.
Too easy:

tar xvJf myArchive.tar.xz --to-command=sha1sum

The result is like this:

z/
z/DOCUMENTATION
3c4d9df9bcbd1fb756b1aaba8dd6a2db788a8659 *-
z/getnameenv.sh
1b7f1ef4bbb229e4dc5d280c3c9835d9d061726a *-

Or create "tarsha1.sh" with:

#!/bin/bash
sha1=`sha1sum`
echo -n $sha1 | sed 's/ .*$//'
echo " $TAR_FILENAME"

Then use it this way:

tar xJf myArchive.tar.xz --to-command=./tarsha1.sh

The result is like this:

3c4d9df9bcbd1fb756b1aaba8dd6a2db788a8659 z/DOCUMENTATION
1b7f1ef4bbb229e4dc5d280c3c9835d9d061726a z/getnameenv.sh
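If the goal is to compare an archive against another copy (or an earlier run), one possible follow-up is to capture the per-file checksums and diff the sorted lists. The file names below are placeholders:

tar xJf myArchive.tar.xz --to-command=./tarsha1.sh > archive.sha1
tar xJf myBackup.tar.xz  --to-command=./tarsha1.sh > backup.sha1
diff <(sort archive.sha1) <(sort backup.sha1)   # no output means the lists match

The checksum lines are written to tar's stdout, so redirecting it is enough to capture the list.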
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303667", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184890/" ] }
303,669
If I want to search a file in the system I use the following command: sudo find `pwd` -name filename.ext I want to make an alias for an easier word like search , so I used the command: alias search "find `pwd` -name " The problem is that the command translates the pwd part to the actual path i'm in now. When i type simply alias to see the list of aliases I see: search find /path/to/my/homedir -name How can I avoid this?
Use single quotes to avoid shell expansion at time of definition alias search='find `pwd` -name '
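If you are in bash (the syntax used in this answer; the alias in the question looks more like csh), a small function is another way to get the same effect, since everything inside it is evaluated only when it runs. This is just a sketch:

search() {
    sudo find "$PWD" -name "$1"
}

Usage would then be, for example, search 'filename.ext' (quoting the pattern so the shell does not expand it first).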
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/303669", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182347/" ] }
303,699
I would like to know if the output of a Red-Hat based linux could be differently interpreted by a Debian based linux. To make the question even more specific, what I am after, is understanding how the "load average" from the first line of the top command on a Red-Hat system is interpreted and how to verify this by official documentation ro code. [There are many ways to approach this subject, all of which are acceptable answers to the question] One potential approach, would be to find where this information is officially documented. Another one, would be to find the code version that top is built from in the specific distribution and version I am working on. The command output I am getting is: top - 13:08:34 up 1:19, 2 users, load average: 0.02, 0.00, 0.00 Tasks: 183 total, 1 running, 182 sleeping, 0 stopped, 0 zombie Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 96.8%id, 2.7%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3922520k total, 788956k used, 3133564k free, 120720k buffers Swap: 2097148k total, 0k used, 2097148k free, 344216k cached In this case how can I interpret the load average value? I have managed to locate that the average load is about the last minute, from one documentation source and that it should be interpreted after being multiplied with 100, by another documentation source. So, the question is: Is it 0.02% or 2% loaded? Documentation sources and versions: The first one stars with TOP(1) Linux User’s Manual TOP(1) NAME top - display Linux tasks Source: man top in my RedHat distribution Ubuntu also has the version with "tasks" that does not explain the load average in: http://manpages.ubuntu.com/manpages/precise/man1/top.1.html The second one starts with TOP(1) User Commands TOP(1)NAME toptop - display Linux processes Source: http://man7.org/linux/man-pages/man1/top.1.htm This one starts with: TOP(1)NAMEtop - display and update information about the top cpu processes Source: http://www.unixtop.org/man.shtml The first one, can be seen by man top in RHEL or in online ubuntu documentation and it does not have any explanation for the output format (nor about the load average in which I am interested in). The second one, contains a brief explanation, pointing out that the load average has to do with the last 1 minute, but nothing about the interpretation of its value! I quote directly from the second source: 2a. UPTIME and LOAD Averages This portion consists of a single line containing: program or window name, depending on display mode current time and length of time since last boot total number of users system load avg over the last 1, 5 and 15 minutes So, if this explanation is indeed correct, it is just enough to understand that the load average is about the last 1 minute. But it does not explain the format of the number. In the third explanation, it says that: When specifying numbers for load averages, they should be multiplied by 100. This explanation suggests that 0.02 means 2% and not 0.02%. But is this correct? Additionally, is it correct for all distributions of linux and potentially different implementations of top ? To find the answer to this question, I tried to go through the code by searching it online. But I found, at least, two different version of top related to RHEL out there! the builtin-top.c and the refactored top.c . Both copyrighted by Red-Hat as the notice says in the beginning of the code and thus seems logical that RHEL uses one of these. 
http://lxr.free-electrons.com/source/tools/perf/builtin-top.c http://lxr.free-electrons.com/source/tools/perf/util/top.c So, before delving into that much code, I wanted an opinion about where to focus to form an accurate understanding on how cpu load is interpreted? From information given in the answers below, in addition to some personal search, I have found that: The top that I am using is contained in the package procps-3.2.8. Which can be verified by using top -v . In the version of procps-3.2.8 that I have downloaded from the official website it seems that the tool uptime get its information from the procfs file /proc/loadavg directly (not utilizing the linux function getloadavg() ). Now for the top command it also does not use the function getloadavg() . I managed to verify that the top does indeed the same things as the uptime tool to show the load averages. It actually calls the uptime tool's function, which gets its information from the procfs file /proc/loadavg . So, everything points to the /proc/loadavg file! Thus, to form an accurate understanding of the load average produced by top , one must read the kernel code to see how the file loadavg is written. There is also an excellent article pointed out in one of the answers that provides a layman's terms explanation of the three values of loadavg . So, despite the fact that all answers have been equally useful and helpful, I am going to mark the one that pointed to the article http://www.linuxjournal.com//article/9001 as "the" answer to my question.Thank you all for your contribution! Additionally from the question Understanding top and load average , I have found a link to the source code of the kernel that points to the spot where loadavg is calculated. As it seems there is a huge comment explaining the way it works, also this part of the code is in C ! The link to the code is http://lxr.free-electrons.com/source/kernel/sched/loadavg.c Again I am not trying to engage in any form of plagiarism, I am just adding this for completeness. So, I am repeating that the link to the kernel code was found from one of the answers in Understanding top and load average .
The CPU load is the length of the run queue, i.e. the length of the queue of processes waiting to be run. The uptime command may be used to see the average length of the run queue over the last minute, the last five minutes, and the last 15 minutes, just like what's usually displayed by top . A high load value means the run queue is long. A low value means that it is short. So, if the one minute load average is 0.05, it means that on average during that minute, there was 0.05 processes waiting to run in the run queue. It is not a percentage. This is, AFAIK, the same on all Unices (although some Unices may not count processes waiting for I/O, which I think Linux does; OpenBSD, for a while only, also counted kernel threads, so that the load was always 1 or more). The Linux top utility gets the load values from the kernel, which writes them to /proc/loadavg . Looking at the sources for procps-3.2.8 , we see that: To display the load averages, the sprint_uptime() function is called in top.c . This function lives in proc/whattime.c and calls loadavg() in proc/sysinfo.c . That function simply opens LOADAVG_FILE to read the load averages. LOADAVG_FILE is defined earlier as "/proc/loadavg" .
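To see the raw values that top and uptime read, you can look at the file directly. The numbers below are only illustrative:

$ cat /proc/loadavg
0.02 0.00 0.00 1/183 12345

The first three fields are the 1, 5 and 15 minute averages, the fourth is currently-runnable/total scheduling entities, and the last is the most recently allocated PID (see proc(5)).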
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/303699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180044/" ] }
303,712
Right now, it looks like this:
Check that you have the locales package installed:

dpkg -l locales

If not, install it:

apt-get install locales

As root, type dpkg-reconfigure locales . You can navigate that list with the up/down arrow keys; for example, choose en_US.UTF-8 . Edit your .bashrc by adding the following lines:

export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8

Run the locale command; the output should be similar to this:

LANG=en_US.UTF-8
LANGUAGE=en_US:en
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/303712", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184936/" ] }
303,754
I am using CentOS 7. I installed okular, which is a PDF viewer, with the command: sudo yum install okular As you can see in the picture below, it installed 37 dependent packages to install okular. But I wasn't satisfied with the features of the application and I decided to remove it. The problem is that if I remove it with the command: sudo yum autoremove okular It only removes four dependent packages. And if I remove it with the command: sudo yum remove okular It removes only one package which is okular.x86_64. Now, my question is that is there a way to remove all 37 installed packages with a command or do I have to remove all of them one by one?
Personally, I don't like yum plugins because they don't work a lot of the time, in my experience.

You can use the yum history command to view your yum history.

[root@testbox ~]# yum history
Loaded plugins: product-id, rhnplugin, search-disabled-repos, subscription-manager, verify, versionlock
ID     | Login user               | Date and time    | Action(s)      | Altered
----------------------------------------------------------------------------------
    19 | Jason <jason>            | 2016-06-28 09:16 | Install        |   10

You can find info about the transaction by doing yum history info <transaction id> . So: yum history info 19 would tell you all the packages that were installed with transaction 19 and the command line that was used to install the packages.

If you want to undo transaction 19, you would run yum history undo 19 . Alternatively, if you just wanted to undo the last transaction you did (you installed a software package and didn't like it), you could just do yum history undo last
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/303754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184941/" ] }
303,755
I have a text file of numbered entries:

1. foo
2. bar 100%
3. kittens
4. eat cake
5. unicorns
6. rainbows

and so on up to some large number. Then after an empty line, a new block starts from 1. I insert a new entry, replacing, say, 4., and I need to renumber all the subsequent entries in the block:

1. foo
2. bar 100%
3. kittens
4. sunshine <
5. eat cake
6. unicorns
7. rainbows
You can always add your new entry with a x. newentry syntax and renumber everything afterwards with something like:

awk -F . -v OFS=. '{if (NF) $1 = ++n; else n = 0; print}'

-F . : sets the field separator to . (1)
-v OFS=. : same for the output field separator ( -F . is short for -v FS=. ).
{...} : no condition, so the code inside {...} is run for each line.
if (NF) : if the number of fields is greater than 0. With FS being . , that means if the current line contains at least one . . We could also make it if (length) to check for non-empty lines.
$1 = ++n : set the first field to an incremented n (initially 0, then 1, then 2...).
else n = 0 : else (when NF == 0) reset n to 0.
print : print the (possibly modified) line.

(1) The syntax is -F <extended-regular-expression> but when <extended-regular-expression> is a single character, that is not taken as a regular expression (where . means any character) but as that character instead.
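Since awk does not modify the file in place, a typical way to apply this would be via a temporary file (the file name list.txt is just an example):

awk -F . -v OFS=. '{if (NF) $1 = ++n; else n = 0; print}' list.txt > list.txt.new &&
    mv list.txt.new list.txt

With GNU awk 4.1 or later you could also try gawk -i inplace with the same program.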
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165104/" ] }
303,771
I have granted a group permission to run certain commands with no password via sudo. When one of the users makes a typo or runs the wrong command the system prompts them for their password and then they get an error. This is confusing for the user so I'd like to just display an error instead of prompting them for a password. Is this possible?

Here is an example of my sudoers file:

%mygroup ALL=(ALL) NOPASSWD:/usr/local/bin/myscript.sh *

Example when they run the wrong script:

# sudo /usr/local/bin/otherscript.sh
[sudo] password for user:
Sorry, user user is not allowed to execute '/usr/local/bin/otherscript.sh' as root on <hostname>.

Desired output:

Sorry, user user is not allowed to execute '/usr/local/bin/otherscript.sh' as root on <hostname>. Please check the command and try again.

Note the lack of password prompt. My google-fu has failed me and only returns results on not asking for a password when the user is permitted to run the command.
From a quick read of sudo(8):

-n  The -n (non-interactive) option prevents sudo from prompting the user for a password. If a password is required for the command to run, sudo will display an error message and exit.

And for the doubters:

# grep jdoe /etc/sudoers
jdoe ALL=(ALL) NOPASSWD: /bin/echo
# Tested thusly:
% sudo echo allowed
allowed
% sudo -n ed
sudo: a password is required
% sudo ed

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

Password:

So an alias for sudo for these folks would likely do the trick, to prevent the password prompt. Now why this requires custom compiling sudo, I don't know, I just read the manual.
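That alias could be as simple as the line below in the affected users' shell startup file, with the caveat that it makes every sudo call non-interactive, not just the NOPASSWD ones:

alias sudo='sudo -n'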
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/303771", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184982/" ] }
303,787
I'm trying (with minimal success at present) to setup the Dovecot mail server on my Fedora 24 server. I've installed Dovecot and set the conf file up, all fine. But when I run: systemctl restart dovecot After editing the conf file I get this message Job for dovecot.service failed because the control process exited with error code. See "systemctl status dovecot.service" and "journalctl -xe" for details Running systemctl status dovecot.service gives me a different error [root@fedora app]# systemctl status dovecot.service● dovecot.service - Dovecot IMAP/POP3 email server Loaded: loaded (/usr/lib/systemd/system/dovecot.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Tue 2016-08-16 15:02:30 UTC; 37min ago Docs: man:dovecot(1) http://wiki2.dovecot.org/ Process: 11293 ExecStart=/usr/sbin/dovecot (code=exited, status=89) Process: 11285 ExecStartPre=/usr/libexec/dovecot/prestartscript (code=exited, status=0/SUCCESS)Aug 16 15:02:30 fedora dovecot[11293]: Error: service(imap-login): listen(*, 993) failed: Address already in useAug 16 15:02:30 fedora dovecot[11293]: master: Error: service(imap-login): listen(*, 993) failed: Address already in useAug 16 15:02:30 fedora dovecot[11293]: Error: service(imap-login): listen(::, 993) failed: Address already in useAug 16 15:02:30 fedora dovecot[11293]: master: Error: service(imap-login): listen(::, 993) failed: Address already in useAug 16 15:02:30 fedora dovecot[11293]: Fatal: Failed to start listenersAug 16 15:02:30 fedora dovecot[11293]: master: Fatal: Failed to start listenersAug 16 15:02:30 fedora systemd[1]: dovecot.service: Control process exited, code=exited status=89Aug 16 15:02:30 fedora systemd[1]: Failed to start Dovecot IMAP/POP3 email server.Aug 16 15:02:30 fedora systemd[1]: dovecot.service: Unit entered failed state.Aug 16 15:02:30 fedora systemd[1]: dovecot.service: Failed with result 'exit-code'. I tried running lsof -i | grep 993 but this yields no processes. Any idea how to fix this?
netstat is your friend when you're trying to troubleshoot a lot of network-related problems. To find a listening port, I would use:

netstat -tulpn | grep :<port number>

For example, to find what pids are listening on port 22, I would run:

netstat -tulpn | grep :22
tcp        0      0 0.0.0.0:22      0.0.0.0:*      LISTEN      3062/sshd

That tells me that sshd with pid 3062 is listening on port 22.
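As a side note, this is probably also why the lsof attempt in the question came up empty: without -P, lsof prints service names (such as imaps) instead of port numbers, so grepping for 993 finds nothing. Something along these lines should work (run as root):

lsof -nP -i :993
# or, where netstat is not installed, ss gives similar output:
ss -tlnp | grep :993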
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303787", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
303,788
I'm trying to write a simple shell script for launching commands with urxvt. The idea is this (not the complete script, just the idea): PRMPT="read -r CMD"urxvt -g 55x6-20+20 -e $PRMPTCMD There are two problems with this script. The first one is that read is not fit for this kind of task, as it would ignore options of a command (if I write echo hello read would assign echo to CMD and ignore hello ). The second one, which is the one that puzzles me most, is that urxvt -e exits immediately and does not wait for my input. I figure that it has to do with the fact that read is a builtin function, but for example urxvt -e echo hello works fine. Does anybody have any suggestions on how to change the script?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303788", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184990/" ] }
303,811
I've stumbled upon surprising (for me) permission behavior on FreeBSD. Let's say I'm operating as non-root user. I create a file, set its permission on read-only and then try to write into it: $ touch f$ chmod 400 f$ ls -l f-r-------- 1 user wheel f$ echo a >> tt: Permission denied. So far so good. Now I do the same as root and it writes into the file: # ls -l f2-r-------- 1 root wheel f2# echo a >> f2# echo $?0 Is it a bug or intended behavior? Can I safely assume that this would work so on any Unix & Linux?
It's normal for root to be able to override permissions in this manner. Another example is root being able to read a file with no read access:

$ echo hello > tst
$ chmod 0 tst
$ ls -l tst
---------- 1 sweh sweh 6 Aug 16 15:46 tst
$ cat tst
cat: tst: Permission denied
$ sudo cat tst
hello

Some systems have the concept of immutable files. eg on FreeBSD:

# ls -l tst
-rw-r--r-- 1 sweh sweh 6 Aug 16 15:50 tst
# chflags simmutable tst
# echo there >> tst
tst: Operation not permitted.

Now even root can't write to the file. But, of course, root can remove the flag:

# chflags nosimmutable tst
# echo there >> tst
# cat tst
hello
there

With FreeBSD you can go a step further and set a kernel flag to prevent root from removing the flag:

# chflags simmutable tst
# sysctl kern.securelevel=1
kern.securelevel: -1 -> 1
# chflags nosimmutable tst
chflags: tst: Operation not permitted

Now no one, not even root can change this file. (The system needs rebooting to reduce the securelevel).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/303811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66424/" ] }
303,881
I'm on Ubuntu 15.04 and today I've been reading an article about Linux security from this link. Everything went well until the part about the UID 0 account:

Only root should have the UID 0. Another account with that UID is often synonymous with a backdoor.

When running the command they gave me, I found out there was another root account. Just after that I disabled the account as the article describes, but I'm sort of afraid of this account. I can find it in /etc/passwd :

rootk:x:0:500::/:/bin/false

And in /etc/shadow :

rootk:!$6$loVamV9N$TorjQ2i4UATqZs0WUneMGRCDFGgrRA8OoJqoO3CCLzbeQm5eLx.VaJHeVXUgAV7E5hgvDTM4BAe7XonW6xmup1:16795:0:99999:7::1:

I tried to delete this account using userdel rootk but got this error:

userdel: user rootk is currently used by process 1

Process 1 is systemd. Could anyone give me some advice please? Should I userdel -f ? Is this account a normal root account?
Processes and files are actually owned by user ID numbers, not user names. rootk and root have the same UID, so everything owned by one is also owned by the other. Based on your description, it sounds like userdel saw every root process (UID 0) as belonging to the rootk user. According to this man page , userdel has an option -f to force removal of the account even if it has active processes. And userdel would probably just delete rootk 's passwd entry and home directory, without affecting the actual root account. To be safer, I might be inclined to hand-edit the password file to remove the entry for rootk , then hand-remove rootk 's home directory. You may have a command on your system named vipw , which lets you safely edit /etc/passwd in a text editor.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/303881", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185068/" ] }
303,915
I have the following file:

$ cat disk.out
disk0
fcs0
text
text
text
disk1
fcs1
text
text
text
text
...

What I am trying to achieve is match "disk" + "fcs" and then print the pair in one line, like this:

disk0,fcs0
disk1,fcs1
...

So I am matching "disk" and "fcs" with awk and changing the output record separator to ",":

$ awk '/disk|fcs/' ORS="," disk.out
disk0,fcs0,disk1,fcs1,

The problem is, it will print all the matches on one line and with a trailing , . How can I print one pair per line? Like this:

disk0,fcs0
disk1,fcs1
...
You have to save the "disk" line (without printing it) until you find the next "fcs" line:

awk '/disk/{ DISK=$0; next } /fcs/{ print DISK "," $0 }'

The problem with your approach is that it prints any line matching "disk" or "fcs", without combining those lines.

Edit: the script of sp asic is more robust, in that it ignores

disk3
text
fcs3

My script would happily print "disk3,fcs3" in this case.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303915", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185100/" ] }
303,926
I got a bash script essentially running this:

#!/bin/bash
[...]

while true; do
    str="broadcast "`randArrayElement "messages[@]"`
    server_send_message $str
    sleep $interval
done

Now I want to run this as a systemd service, my service script looks like this:

[Unit]
Description=Announcer
After=network.target

[Service]
ExecStart=/usr/local/bin/somescript &; disown
ExecStop=/usr/bin/kill -9 `cat /tmp/somescript.pid`
Type=forking
PIDFile=/tmp/somescript.pid

[Install]
WantedBy=default.target

Unfortunately when I run this service via service somescript start it is working, but because of the while true loop my terminal is stuck in starting the script:

● somescript.service - somescript service
   Loaded: loaded (/etc/systemd/system/somescript.service; disabled; vendor preset: enabled)
   Active: activating (start) since Wed 2016-08-17 12:22:34 CEST; 43s ago
  Control: 17395 (somescript)
   CGroup: /system.slice/somescript.service
           ├─17395 /bin/bash /usr/local/bin/somescript &; disown
           └─17409 sleep 600

How can I run this script as a service without being stuck in "starting" / the while true loop?
You need to let systemd work for you. Let it handle the forking at the start and the killing of the process. Eg replace your service part by:

[Service]
Type=simple
ExecStart=/usr/local/bin/somescript
PIDFile=/tmp/somescript.pid

then you can use systemctl start , status and stop . You must remember that the lines in systemd are NOT interpreted by the shell, so for example your &; is merely passed as another parameter of 2 characters to your script.
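Putting it together, a complete unit along those lines might look like the sketch below (paths taken from the question; PIDFile is left out because with Type=simple systemd tracks the main process itself, and Restart= is optional):

[Unit]
Description=Announcer
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/somescript
Restart=on-failure

[Install]
WantedBy=default.target

Stopping it with systemctl stop somescript then signals the whole control group, so the sleep process is terminated as well.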
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/303926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94025/" ] }
303,960
How can I refer to a string by index in sh/bash? That is, basically splitting it. I am trying to strip 5 characters of a file name. All names have the structure: name_nr_code. I am trying to remove the 5 alphanumeric code bit. name_nr_ is always 10 characters. Is there a thing like; for i in * ; do mv "$i" "$i"[:10] ; done
Simple as this. (bash) for i in * ; do mv -- "$i" "${i:0:5}" ; done Voila. And an explanation from Advanced Bash-Scripting Guide ( Chapter 10. Manipulating Variables ) , (with extra NOTE s inline to highlight the errors in that manual): Substring Extraction ${string:position} Extracts substring from $string at $position . If the $string parameter is "*" or "@", then this extracts the positional parameters, starting at $position . ${string:position:length} Extracts $length characters of substring from $string at $position . NOTE missing quotes around parameter expansions! echo should not be used for arbitrary data. stringZ=abcABC123ABCabc# 0123456789.....# 0-based indexing.echo ${stringZ:0} # abcABC123ABCabcecho ${stringZ:1} # bcABC123ABCabcecho ${stringZ:7} # 23ABCabc echo ${stringZ:7:3} # 23A # Three characters of substring.# Is it possible to index from the right end of the string?echo ${stringZ:-4} # abcABC123ABCabc# Defaults to full string, as in ${parameter:-default}.# However . . . echo ${stringZ:(-4)} # Cabcecho ${stringZ: -4} # Cabc# Now, it works.# Parentheses or added space "escape" the position parameter. The position and length arguments can be "parameterized," that is, represented as a variable, rather than as a numerical constant. If the $string parameter is "*" or "@", then this extracts a maximum of $length positional parameters, starting at $position . echo ${*:2} # Echoes second and following positional parameters.echo ${@:2} # Same as above.echo ${*:2:3} # Echoes three positional parameters, starting at second. NOTE : expr substr is a GNU extension. expr substr $string $position $length Extracts $length characters from $string starting at $position . stringZ=abcABC123ABCabc# 123456789......# 1-based indexing.echo `expr substr $stringZ 1 2` # abecho `expr substr $stringZ 4 3` # ABC NOTE : That echo is redundant and makes it even less reliable. Use expr substr + "$string1" 1 2 . NOTE : expr will return with a non-zero exit status if the output is 0 (or -0, 00...). BTW. The book is present in the official Ubuntu repository as abs-guide .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/303960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63603/" ] }
303,961
I have a file sample_init.ora There are 2 lines in the file. db_create_file_dest="DATAG"control_files=('DATAG/DBNAME/controlfile/control01.ctl','DATAG/DBNAME/controlfile/control02.ctl') Now, I would like to change only the 2nd line from DATAG to DATAC2. Is there a way? I tried many SED/AWK forums but left with no options. A SED based reply would be HIGHLY regarded.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/303961", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163418/" ] }
304,005
Two setuid programs, /usr/bin/bar and /usr/bin/baz , share a single configuration file foo . The configuration file's mode is 0640 , for it holds sensitive information. The one program runs as bar:bar (that is, as user bar, group bar ); the other as baz:baz . Changing users is not an option, and even changing groups would not be preferable. I wish to hard link the single configuration file as /etc/bar/foo and /etc/baz/foo . However, this fails because the file must, as far as I know, belong either to root:bar or to root:baz . Potential solution: Create a new group barbaz whose members are bar and baz . Let foo belong to root:barbaz . That looks like a pretty heavy-handed solution to me. Is there no neater, simpler way to share the configuration file foo between the two programs? For now, I am maintaining two, identical copies of the file. This works, but is obviously wrong. What would be right? For information: I have little experience with Unix groups and none with setgid(2).
You can use ACLs so the file can be read by people in both groups.

chgrp bar file
chmod 640 file
setfacl -m g:baz:r-- file

Now both bar and baz groups can read the file.

For example, here's a file owned by bin:bin with mode 640.

$ ls -l foo
-rw-r-----+ 1 bin bin 5 Aug 17 12:19 foo

The + means there's an ACL set, so let's take a look at it.

$ getfacl foo
# file: foo
# owner: bin
# group: bin
user::rw-
group::r--
group:sweh:r--
mask::r--
other::---

We can see the line group:sweh:r-- : that means people in the group sweh can read it. Hey, that's me!

$ id
uid=500(sweh) gid=500(sweh) groups=500(sweh)

And yes, I can read the file.

$ cat foo
data
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/304005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18202/" ] }
304,030
This is a purely academic question, because this will never happen. If a PID is stored as type pid_t, and not some arbitrary-precision type, then there is a limit to the number of PIDs that can exist at one time. Is there a defined behavior for when PIDs overflow? Will the 65536th process kill /sbin/init and create a kernel panic? Or is there some safety measure in place?
The fork syscall should return -1, and set errno to EAGAIN . What happens after that will depend on the process that called fork . From fork : The fork() function shall fail if: [EAGAIN] The system lacked the necessary resources to create another process, or the system-imposed limit on the total number of processes under execution system-wide or by a single user {CHILD_MAX} would be exceeded.
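On Linux you can see the relevant limits from a shell; the values below are only examples and vary between systems:

$ cat /proc/sys/kernel/pid_max
32768
$ ulimit -u        # per-user limit on processes (RLIMIT_NPROC)
62817

kernel.pid_max can be raised with sysctl (typically up to 4194304 on 64-bit kernels), so the point at which fork() starts failing with EAGAIN is configurable rather than fixed at 65536.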
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304030", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182201/" ] }
304,032
I start GNU Parallel with --tmux which will start a tmux window per job. I would like to join all windows as panes with the size nicely distributed between them (like tiled view: C-b M-5). I tried doing: seq 15 | parallel tmux -S /tmp/tmsF8j3K joinp -s {} -t 1 But it does not distribute the height evenly resulting in: create pane failed: pane too small Is there a way to tell a window to distribute height evenly when joinp ing or a way to join all windows as panes and then tile them? Maybe something using select-layout tiled ? Edit I am using this as the test program: seq 1000 | parallel --jobs 9 --tmux sleep The goal is to have the 9 running jobs shown in a nice 3x3 window when attaching to tmux. When one job dies it should be replaced by the next job. I have tried: while [ -e "$SERVER" ] ; do top=$(tmux -S $SERVER new-window -P -n all) tmux -S $SERVER list-panes -a | grep -v "^$top" | cut -d':' -f1-2 | while read p ; do tmux -S $SERVER joinp -s $p -t $top tmux -S $SERVER select-layout tiled done tmux -S $SERVER kill-pane -t $top tmux -S $SERVER select-layout tiled sleep 1done But it still gives: can't find pane X And it does not keep all windows as panes in the first window when attaching.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304032", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
304,050
I recently installed dnsmasq to act as DNS Server for my local network. dnsmasq listens on port 53 which is already in use by the local DNS stub listener from systemd-resolved . Just stopping systemd-resolved and then restart it after dnsmasq is running solves this issue. But it returns after a reboot: systemd-resolved is started with preference and dnsmasq will not start because port 53 is already in use. The first obvious question, I guess, is how do I best make systemd-resolved understand that it should not start the local DNS stub listener and thus keep port 53 for use by dnsmasq? A more interesting question, however, is how the two services are generally meant to work together. Are they even meant to work side by side or is systemd-resolved just in the way if one's using dnsmasq?
As of systemd 232 (released in 2017) you can edit /etc/systemd/resolved.conf (not /etc/resolv.conf ) and add this line: DNSStubListener=no This will switch off binding to port 53. The option is described in more details in the resolved.conf manpage . You can find the systemd version your system is running with: systemctl --version
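One possible sequence after editing the file, assuming dnsmasq is already configured, would be:

sudo systemctl restart systemd-resolved   # stop the stub listener on 127.0.0.53:53
sudo systemctl restart dnsmasq            # dnsmasq can now bind port 53
ls -l /etc/resolv.conf                    # check it no longer points at the 127.0.0.53 stub

Whether /etc/resolv.conf should point at dnsmasq (127.0.0.1) or somewhere else depends on how you want local clients to resolve names.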
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/304050", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134509/" ] }
304,077
I have learned a lot today messing around with ssh with RSA and creating system user accounts with no password, no login etc etc. What I was trying to do was create a user with a home directory (needed for ~/.ssh/ ) and a password (needed for the initial ssh setup), but I can't seem to get it set up correctly. I know about using:

ssh-keygen
ssh-copy-id user@remotehost

This is simple for RSA, and I know how to create a user with, say:

useradd -r newuser

OR

adduser newuser --system --shell=/bin/false
passwd newuser
passwd -d newuser

The end goal is a user who doesn't have a shell, or at least can't be logged into from a remote computer, but can still be used to ssh over to another computer and run a command. Is this even possible?

The reason/goal is to have a user who, when the UPS runs low on power, shuts down the other connected computers via ssh before shutting down the main computer. (Only one computer can connect to the UPS via USB at a time to monitor the stats.) I don't want people to be able to log in via SSH with the username UPS, but I need ups to be able to ssh into remotehost without a password.
Set the crypt field to * or to !! in /etc/shadow eg

# adduser tst
# passwd -l tst
Locking password for user tst.
passwd: Success
# grep tst /etc/passwd
tst:x:1000:1000::/home/tst:/bin/bash
# grep tst /etc/shadow
tst:!!:17030:0:99999:7:::

At this point the user can not login because there's no valid password.

Now add a command="/thing/to/do" to the beginning of the public key in the authorized_keys file eg

# ls -l $PWD/authorized_keys
-rw-r--r-- 1 tst tst 431 Aug 17 17:54 /home/tst/.ssh/authorized_keys
# cat $PWD/authorized_keys
command="/bin/echo hello" ssh-rsa AAAAB3NzaC1yc2E....etcetc

Now this key can be used, but the only thing it can be used for is that forced command:

$ ssh -i ~/.ssh/id_rsa tst@test1
hello
Connection to test1 closed.

If you try to do anything else it'll fail, and still force the same command:

$ ssh -i ~/.ssh/id_rsa tst@test1 reboot
hello
$
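For the UPS use case in the question you would typically also lock the key down with the other authorized_keys options from sshd(8); the shutdown command below is only an example and assumes the key logs in as an account allowed to run it:

command="/sbin/shutdown -h now",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2E....etcetc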
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/130767/" ] }
304,087
I am wondering whether there are Linux file systems that support an archive bit for each file ( https://en.m.wikipedia.org/wiki/Archive_bit ). I need something with the same logic in a native filesystem such as ext3, not FAT and not NTFS.
Linux doesn't have an exact equivalent of the DOS/Windows archive bit, but you can make something similar. Modern Linux systems support custom file attributes, at least on ext4 and btrfs. You can use getfattr to list them and setfattr to set them. Custom attributes are extended attributes in the user namespace, i.e. with a name that starts with the five characters user. .

$ touch foo
$ getfattr foo
$ setfattr -n user.archive -v "$(date +%s -r foo)" foo
$ getfattr -d foo
# file: foo
user.archive="1471478895"

You can use a custom attribute if you wish. The value can be any short string (how much storage is available depends on the filesystem and on the kernel version; a few hundred bytes should be fine). Here, I use the file's timestamp; a modification would update the actual timestamp but not the copy in the custom attribute. Note that if the file is modified by deleting it and replacing it with a new version, as opposed to overwriting the existing file, the custom attributes will disappear. This should be fine for your purpose: if the attribute is not present then the file should be backed up.

Incremental backup programs in the Unix world don't use custom attributes. What they do is to compare the timestamp of the file with the timestamp of the backup, and back up the file if it's changed. This is more reliable because it takes the actual state of the backup into account — backups made solely according to the system state are more prone to missing files due to a backup disappearing or due to mistakes made when maintaining the attribute.
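A rough sketch of how the attribute could drive an incremental backup, with /data and /backup as placeholder paths:

#!/bin/sh
# copy files whose current mtime differs from the stored user.archive value
for f in /data/*; do
    [ -f "$f" ] || continue
    saved=$(getfattr --only-values -n user.archive "$f" 2>/dev/null)
    now=$(date +%s -r "$f")
    if [ "$saved" != "$now" ]; then
        cp -p "$f" /backup/
        setfattr -n user.archive -v "$now" "$f"
    fi
done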
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12985/" ] }
304,091
Is there a mechanism built-in to CentOS 7 to configure network interfaces with the classic network scripts, while allowing for wildcards in the device name? Something along the lines of: /etc/sysconfig/network-scripts/ifcfg-* ... would apply to every detected device. The use case here is that I'm building a CentOS 7 image which will be used on various hardware. Some hardware has multiple NICs, others a single NIC. Therefore, when the image boots for the first time, any pre-configured scripts in /etc/sysconfig/network-scripts/ don't necessarily match the current device names. The actual ifcfg script and network isn't exotic, it's a simple IPv4/DHCP network. A couple things I'm trying to avoid (if possible): NetworkManager. Changing the default interface names from udev. Pre-configuration is the main goal here. Thanks!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304091", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185224/" ] }
304,133
I was trying to execute new line using echo and tried following two commands: First command: echo $'Hello World\nThis is a new line' Response: Hello WorldThis is a new line Second command: echo $"Hello World\nThis is a new line" Response: Hello World\nThis is a new line My question is what's the difference between string wrapped with $' ' vs string wrapped with $" " in bash 's echo ?
As explained here , the syntax $'string' specifies a C-style string which includes magic escaped characters, such as \n for a newline. $"string" is for I18N expansion, which has no such magic escapes. Note that these are distinct from the more common "string" (weak quoting) and 'string' (strong quoting).
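If the goal is simply to get the newline interpreted, two common alternatives are shown below; printf is the portable one, while echo -e is not specified by POSIX:

printf 'Hello World\nThis is a new line\n'
echo -e "Hello World\nThis is a new line"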
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304133", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158650/" ] }
304,177
I have a CSV file which is in a binary character set, but I have to convert it to UTF-8 to process it in HDFS (Hadoop). I have used the command below to check the character set:

file -bi filename.csv

Output:

application/octet-stream; charset=binary

When I try to convert the file from binary to UTF-8 it throws an error:

iconv -f binary -t utf-8 filename.csv
iconv: conversion from binary' is not supported
Try iconv --help' or iconv --usage' for more information.

Can anyone please help me understand whether it is possible to convert or not? I am able to see the data using the head command. What does it mean? Binary means non-readable, but then how can the head command or notepad read the data?

od -tc < filename.csv | head

0000000 357 273 277   |   |   R   e   q   u   e   s   t   _   I   D   #
0000020   D   #   T   y   p   e   #   D   #   S   u   b   m   i   t   t
0000040   e   r   #   D   #   S   h   o   r   t   _   D   e   s   c   r
0000060   i   p   t   i   o   n   #   D   #   L   o   g   _   T   e   x
0000100   t   #   D   #   S   t   a   t   u   s   #   D   #   A   s   s
0000120   i   g   n   e   d   _   T   o   #   D   #   A   s   s   i   g
0000140   n   e   e   #   D   #   C   r   e   a   t   e   _   D   a   t
0000160   e   #   D   #   F   o   r   w   T   o   E   x   t   H   D   #
0000200   D   #   L   a   s   t   _   M   o   d   i   f   i   e   d   _
0000220   B   y   #   D   #   L   o   g   _   I   D   #   D   #   L   o
"binary" isn't an encoding (character-set name). iconv needs an encoding name to do its job. The file utility doesn't give useful information when it doesn't recognize the file format. It could be UTF-16 for example, without a byte-encoding-mark (BOM). notepad reads that. The same applies to UTF-8 (and head would display that since your terminal may be set to UTF-8 encoding, and it would not care about a BOM). If the file is UTF-16, your terminal would display that using head because most of the characters would be ASCII (or even Latin-1), making the "other" byte of the UTF-16 characters a null. In either case, the lack of BOM will (depending on the version of file ) confuse it. But other programs may work, because these file formats can be used with Microsoft Windows as well as portable applications that may run on Windows. To convert the file to UTF-8, you have to know which encoding it uses, and what the name for that encoding is with iconv . If it is already UTF-8, then whether you add a BOM (at the beginning) is optional. UTF-16 has two flavors, according to which byte is first. Or you could even have UTF-32. iconv -l lists these: ISO-10646/UTF-8/ISO-10646/UTF8/UTF-7//UTF-8//UTF-16//UTF-16BE//UTF-16LE//UTF-32//UTF-32BE//UTF-32LE//UTF7//UTF8//UTF16//UTF16BE//UTF16LE//UTF32//UTF32BE//UTF32LE// "LE" and "BE" refer to little-end and big-end for the byte-order. Windows uses the "LE" flavors, and iconv likely assumes that for the flavors lacking "LE" or "BE". You can see this using an octal (sic) dump: $ od -bc big-end0000000 000 124 000 150 000 165 000 040 000 101 000 165 000 147 000 040 \0 T \0 h \0 u \0 \0 A \0 u \0 g \0 0000020 000 061 000 070 000 040 000 060 000 065 000 072 000 060 000 061 \0 1 \0 8 \0 \0 0 \0 5 \0 : \0 0 \0 10000040 000 072 000 065 000 067 000 040 000 105 000 104 000 124 000 040 \0 : \0 5 \0 7 \0 \0 E \0 D \0 T \0 0000060 000 062 000 060 000 061 000 066 000 012 \0 2 \0 0 \0 1 \0 6 \0 \n0000072$ od -bc little-end0000000 124 000 150 000 165 000 040 000 101 000 165 000 147 000 040 000 T \0 h \0 u \0 \0 A \0 u \0 g \0 \00000020 061 000 070 000 040 000 060 000 065 000 072 000 060 000 061 000 1 \0 8 \0 \0 0 \0 5 \0 : \0 0 \0 1 \00000040 072 000 065 000 067 000 040 000 105 000 104 000 124 000 040 000 : \0 5 \0 7 \0 \0 E \0 D \0 T \0 \00000060 062 000 060 000 061 000 066 000 012 000 2 \0 0 \0 1 \0 6 \0 \n \00000072 Assuming UTF-16LE, you could convert using iconv -f UTF-16LE// -t UTF-8// <input >output
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304177", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84560/" ] }
304,199
I tried everything mentioned in this solution Why am I still getting a password prompt with ssh with public key authentication? , but still getting prompt for password. My local log: ssh -vvv srvFlink@remoteHostdebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,passworddebug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic,passworddebug3: preferred publickey,keyboard-interactive,passworddebug3: authmethod_lookup publickeydebug3: remaining preferred: keyboard-interactive,passworddebug3: authmethod_is_enabled publickeydebug1: Next authentication method: publickeydebug1: Offering RSA public key: /home/srvFlink/.ssh/id_rsadebug3: send_pubkey_testdebug2: we sent a publickey packet, wait for replydebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,passworddebug1: Trying private key: /home/srvFlink/.ssh/id_dsadebug3: no such identity: /home/srvFlink/.ssh/id_dsa: No such file or directorydebug1: Trying private key: /home/srvFlink/.ssh/id_ecdsadebug3: no such identity: /home/srvFlink/.ssh/id_ecdsa: No such file or directorydebug1: Trying private key: /home/srvFlink/.ssh/id_ed25519debug3: no such identity: /home/srvFlink/.ssh/id_ed25519: No such file or directorydebug2: we did not send a packet, disable methoddebug3: authmethod_lookup passworddebug3: remaining preferred: ,passworddebug3: authmethod_is_enabled passworddebug1: Next authentication method: passwordsrvFlink@remoteHost's password: Remote machine file permission: drwx------. 2 srvFlink srvFlink 58 Aug 18 04:46 .ssh-rw-------. 1 srvFlink srvFlink 1679 Aug 18 04:41 id_rsa-rw-r--r--. 1 srvFlink srvFlink 406 Aug 18 04:41 id_rsa.pub-rw-rw-r--. 1 srvFlink srvFlink 406 Aug 18 04:45 authorized_keysdrwx------. 2 srvFlink srvFlink 58 Aug 18 04:46 .drwx------. 4 srvFlink srvFlink 4096 Aug 18 05:14 .. In /etc/selinux/config file I have. SELINUX=permissiveSELINUXTYPE=targeted Content of id_rsa.pub of my local machine is there in the Remote machine ~/.ssh/authorized_keys Content of /etc/ssh/sshd_config is same in both of the machine. What might be the issue? EDIT Looks like file permission issue: $ journalctl _COMM=sshdAug 18 06:54:53 localhost sshd[8891]: error: Could not load host key: /etc/ssh/ssh_host_dsa_keyAug 18 06:54:53 localhost sshd[8891]: Authentication refused: bad ownership or modes for file /home/srvFlink/.ssh/authorized_keysAug 18 06:54:56 localhost sshd[8891]: Connection closed by remotehost [preauth]
-rw-rw-r--. 1 srvFlink srvFlink 406 Aug 18 04:45 authorized_keys should be -rw-r--r--. 1 srvFlink srvFlink 406 Aug 18 04:45 authorized_keys as noted in the post you linked in your question , where the accepted answer reads in part "Your home directory ~, your ~/.ssh directory and the ~/.ssh/authorized_keys file on the remote machine must be writable only by you" You also don't post the permissions on your home directory in the question; ensure that those are also not group- or other-writable.
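On the remote machine, as srvFlink, something along these lines should satisfy sshd's StrictModes checks (the home directory itself must also not be group- or world-writable):

chmod go-w ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys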
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185314/" ] }
304,200
I'm trying to list all files and select only certain ones from them. ls -1 returns me the list:

abc
abcd
rm_0818-051752-753-06

After that I want to select only the 3rd line by this criterion, so I do:

ls -1 | grep -E "^rm_.*"

but I get nothing. I'm trying to recognize the first character of the 3rd line in this way:

var=$(ls -1 | grep -E "rm_")
echo $var          // returns rm_0818-051752-753-06
echo ${var:0:1}    // returns some strange symbol 001B in a square

Can you explain this behavior to me, and how could I grep by the first character with ^ ? Thanks
ls -1 | grep -E "^rm_.*" looks good and should work. A possible reason why it doesn't work is an alias bound to the ls command in your profile. To make sure, try:

/bin/ls -1 | grep -E "^rm_.*"
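You can check for and bypass such an alias as shown below; the "strange symbol" 001B mentioned in the question is almost certainly the ESC character that starts a colour escape sequence, which suggests the alias forces colour output (for example --color=always) even when ls is piped:

type ls                        # shows whether ls is an alias and what it expands to
command ls -1 | grep '^rm_'    # runs the real ls, bypassing the alias
\ls -1 | grep '^rm_'           # same idea, shorter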
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304200", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179072/" ] }
304,263
I have to send a POST request to some service with JSON payload, and it includes some user input. That input variable needs to be JSON-encoded to prevent injection attacks. Code example that sends requests and parses response JSON to RESP variable: RESP=`curl --connect-timeout "10" -s -H "Content-Type: application/json" \ -X POST -d '{ "Attribute": '"'$USERINPUT'" }',\ $ENDPOINT | $JQ -r '.key'` How to sanitize, or JSON encode, $USERINPUT before creating JSON payload?
Using the jo utility , the sanitized JSON document could be constructed using data=$( jo Attribute="$USERINPUT" ) You would then use -d "$data" with curl to pass this to the endpoint. Old (but still valid) answer using jq instead of jo : Using jq : USERINPUT=$'a e""R<*&\04\n\thello!\'' This string has a couple of double quotes, an EOT character, a newline, a tab and a single quote, along with some ordinary text. data="$( jq -nc --arg Attribute "$USERINPUT" '$ARGS.named' )" This builds a JSON object containing the user data as the value for the lone Attribute field. From this, we get {"Attribute":"ae\"\"R<*&\u0004\n\thello!'"} as the value in $data . This can now be used in your call to curl : RESP="$( curl --connect-timeout "10" -s \ -H "Content-Type: application/json" \ -X POST -d "$data" \ "$ENDPOINT" | jq -r '.key' )"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304263", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185379/" ] }
304,264
How do I append a timestamp to the prompt when a command is started? I'm looking for a solution halfway between a static prompt and continuous clock . The main difference being the static prompt will show the time when the prompt was loaded. If I let it sit for a few minutes, then enter a command, that's when I want another timestamp to be added. My goal is to be able to easily see the time difference taken to execute commands, and how long a prompt sat idle. The problem with the continuous clock answer, aside from the fact that I want to use bash, is that it would lose the start timestamp because it would refresh the whole thing. For example, I would like my prompt to look like this when I open a new shell: 10:30:21 jeff ~ $ Then let's say it just sits there for a minute before I actually finish typing a command and hit Enter 10:30:21 jeff ~ 10:31:28 $ ./long_running.sh10:36:52 jeff ~ $ Notice it appends another timestamp showing when I actually executed the command. And I can easily see that the command took about 5 minutes to run by subtracting from the start timestamp on the next prompt.
There's a DEBUG trap that can be called before every command is run, e.g. trap 'echo -e "\nStarted at: $(date)\n"' DEBUG So if I do that:
$ trap 'echo -e "\nStarted at: $(date)\n"' DEBUG
$ pwd
Started at: Thu Aug 18 11:59:33 EDT 2016
/home/sweh
$ echo hello
Started at: Thu Aug 18 11:59:35 EDT 2016
hello
$ sleep 100
Started at: Thu Aug 18 11:59:37 EDT 2016
It's not rewriting the prompt, but you can see how it can be made to output stuff before every command. You can make the trap function as complicated as you need.
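If what you ultimately want is the elapsed time of each command, a common bash pattern combines a DEBUG trap with PROMPT_COMMAND. This is a sketch of that idea rather than part of the answer above; the variable and function names are made up:
timer_start() { timer=${timer:-$SECONDS}; }             # DEBUG trap: runs before each command
timer_stop()  { elapsed=$((SECONDS - timer)); unset timer; }
trap timer_start DEBUG
PROMPT_COMMAND=timer_stop
PS1='[last: ${elapsed}s] \u \w \$ '
The single quotes around PS1 matter, so that ${elapsed} is expanded each time the prompt is drawn; the value is only meaningful from the second prompt onwards.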
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304264", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148062/" ] }
304,266
I'm in the process of porting the embedded system for a picozed-based platform from the Xilinx-v2013.4 (3.12 kernel) to the Xilinx-v2016.2 (4.4 kernel). The former version still makes use of an initial RAM disk (initrd) while the new uses an initial RAM fs (initramfs). At boot time, the console is given through the serial interface on the USB connector. I expect it to be ttyPS0. At this point already, I don't know how this relation 'console-ttyPS0' is done !? Does it come from the device tree (I don't see any thing mentioning ttyPS0) ? In the former version (in the RAM disk), it was even not configured in the "init" scripts, neither in 'mdev' configuration file. The boot process is running and then it hangs. Here is the output: Starting kernel ...Uncompressing Linux... done, booting the kernel.Booting Linux on physical CPU 0x0Linux version 4.4.0-test (pierrett@build0109-linux) (gcc version 4.8.3 20140320 (prerelease) (Sourcery CodeBench Lite 2014.05-23) ) #1 SMP PREEMPT Thu Aug 18 12:10:52 CEST 2016CPU: ARMv7 Processor [413fc090] revision 0 (ARMv7), cr=18c5387dCPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cacheMachine model: zynqbootconsole [earlycon0] enabledcma: Reserved 16 MiB at 0x3dc00000Memory policy: Data cache writeallocPERCPU: Embedded 12 pages/cpu @ef7d2000 s18240 r8192 d22720 u49152Built 1 zonelists in Zone order, mobility grouping on. Total pages: 260608Kernel command line: bootargs=console=ttyPS0,115200 root=/dev/ram initrd=0x8000000 rw earlyprintk rootwaitPID hash table entries: 4096 (order: 2, 16384 bytes)Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)Memory: 1009532K/1048576K available (4456K kernel code, 213K rwdata, 1564K rodata, 240K init, 193K bss, 22660K reserved, 16384K cma-reserved, 238976K highmem)Virtual kernel memory layout: vector : 0xffff0000 - 0xffff1000 ( 4 kB) fixmap : 0xffc00000 - 0xfff00000 (3072 kB) vmalloc : 0xf0800000 - 0xff800000 ( 240 MB) lowmem : 0xc0000000 - 0xf0000000 ( 768 MB) pkmap : 0xbfe00000 - 0xc0000000 ( 2 MB) modules : 0xbf000000 - 0xbfe00000 ( 14 MB) .text : 0xc0008000 - 0xc05e949c (6022 kB) .init : 0xc05ea000 - 0xc0626000 ( 240 kB) .data : 0xc0626000 - 0xc065b450 ( 214 kB) .bss : 0xc065b450 - 0xc068bb54 ( 194 kB)Preemptible hierarchical RCU implementation. Build-time adjustment of leaf fanout to 32. RCU restricting CPUs from NR_CPUS=4 to nr_cpu_ids=2.RCU: Adjusting geometry for rcu_fanout_leaf=32, nr_cpu_ids=2NR_IRQS:16 nr_irqs:16 16ps7-slcr mapped to f0802000L2C: platform modifies aux control register: 0x72360000 -> 0x72760000L2C: DT/platform modifies aux control register: 0x72360000 -> 0x72760000L2C-310 erratum 769419 enabledL2C-310 enabling early BRESP for Cortex-A9L2C-310 full line of zeros enabled for Cortex-A9L2C-310 ID prefetch enabled, offset 1 linesL2C-310 dynamic clock gating enabled, standby mode enabledL2C-310 cache controller enabled, 8 ways, 512 kBL2C-310: CACHE_ID 0x410000c8, AUX_CTRL 0x76760001zynq_clock_init: clkc starts at f0802100Zynq clock initsched_clock: 64 bits at 333MHz, resolution 3ns, wraps every 4398046511103nsclocksource: arm_global_timer: mask: 0xffffffffffffffff max_cycles: 0x4ce07af025, max_idle_ns: 440795209040 nsclocksource: ttc_clocksource: mask: 0xffff max_cycles: 0xffff, max_idle_ns: 537538477 nsps7-ttc #0 at f080a000, irq=18Console: colour dummy device 80x30console [tty0] enabledbootconsole [earlycon0] disabled My feeling is that the trouble comes from a wrong setting of the console. 
In the boot log, one can notice that the "tty0" is enabled while in the boot arguments, I expect the console on ttyPS0. Could any one explain how the correct console could be set at startup? Additional info : the device tree serial config : ps7_uart_1: serial@e0001000 { clock-names = "ref_clk", "aper_clk"; clocks = <0x2 0x18 0x2 0x29>; compatible = "xlnx,xuartps"; current-speed = <115200>; device_type = "serial"; interrupt-parent = <&ps7_scugic_0>; interrupts = <0x0 0x32 0x4>; port-number = <0x0>; reg = <0xe0001000 0x1000>; xlnx,has-modem = <0x0>;}; the boot arguments : console=ttyPS0,115200 root=/dev/ram initrd=0x8000000 rw earlyprintk rootwait the kernel serial config : ## Serial drivers# CONFIG_SERIAL_EARLYCON=y# CONFIG_SERIAL_8250 is not set## Non-8250 serial port support## CONFIG_SERIAL_AMBA_PL010 is not set# CONFIG_SERIAL_AMBA_PL011 is not set# CONFIG_SERIAL_EARLYCON_ARM_SEMIHOST is not set# CONFIG_SERIAL_MAX3100 is not set# CONFIG_SERIAL_MAX310X is not setCONFIG_SERIAL_UARTLITE=mCONFIG_SERIAL_CORE=yCONFIG_SERIAL_CORE_CONSOLE=y# CONFIG_SERIAL_JSM is not set# CONFIG_SERIAL_SCCNXP is not set# CONFIG_SERIAL_SC16IS7XX is not set# CONFIG_SERIAL_BCM63XX is not set# CONFIG_SERIAL_ALTERA_JTAGUART is not set# CONFIG_SERIAL_ALTERA_UART is not set# CONFIG_SERIAL_IFX6X60 is not setCONFIG_SERIAL_XILINX_PS_UART=yCONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y# CONFIG_SERIAL_ARC is not set# CONFIG_SERIAL_RP2 is not set# CONFIG_SERIAL_FSL_LPUART is not set# CONFIG_SERIAL_CONEXANT_DIGICOLOR is not set# CONFIG_SERIAL_ST_ASC is not set# CONFIG_SERIAL_STM32 is not set# CONFIG_TTY_PRINTK is not set# CONFIG_HVC_DCC is not set# CONFIG_VIRTIO_CONSOLE is not set# CONFIG_IPMI_HANDLER is not set# CONFIG_HW_RANDOM is not setCONFIG_XILINX_DEVCFG=y# CONFIG_R3964 is not set# CONFIG_APPLICOM is not set# CONFIG_RAW_DRIVER is not set# CONFIG_TCG_TPM is not setCONFIG_DEVPORT=y# CONFIG_XILLYBUS is not set the "inittab" entry: ttyPS0::respawn:/sbin/getty -L ttyPS0 115200 vt100 # GENERIC_SERIAL
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185375/" ] }
304,369
So I have a desktop with a fast SSD and large HDD. I am trying to get a well configured large, fast zpool out of it. I have read that I can carve separate partitions into the SSD for the ZIL and L2ARC which would seem to do what I want, except I have to manually configure how big each partition should be. What I don't like about it is that it's somewhat involved, potentially hard to reconfigure if I need to change the partitions, and it sounds like the maximum filesystem size is limited by the HDD alone since the intent is that everything on the ZIL and L2ARC has to also make it to disk, at least eventually. Also it's not clear if the L2ARC is retained after system reboot or if it has to be populated again. It also seems inefficient to have to copy data from ZIL to L2ARC if they are both on the same SSD, or even to HDD if there is currently no pressure on how much hot data I need on SSD. Alternatively, it seems I can also just have 1 partition on SSD and 1 on HDD and add them to a zpool directly with no redundancy. I have tried this, and noticed sustained read/write speeds greater than what HDD alone can muster. But I don't know if everything is just going to the SSD for now, and everything will go to HDD later once SSD is all filled up. Ideally, I would like to have ZFS transparently shuffle the data around behind the scenes to try to always keep the hot data on the SSD similarly to what L2ARC, and have a sensible amount of empty space on SSD for new writes. The ZIL should be automatically managed to be the right size and preferably live on the SSD as much as possible.If I go the manually configured ZIL + L2ARC route, it seems like the ZIL only needs to be about (10 sec * HDD write speed) big. Doing this maximizes the size of L2ARC which is good. But what happens if I add a striped disk which effectively doubles my HDD speed (and capacity)? Summary of questions if using SSD for ZIL + L2ARC: If I set up SSD for ZIL + L2ARC, how hard is it to re-set it up with different partition sizes? If I use SSD for L2ARC, is its capacity included in total available pool capacity, or is the pool capacity limited by HDD alone? Is L2ARC retained after system reboot, or does it have to be re-populated? Does data have to be copied from ZIL to L2ARC even if both are on same physical SSD? If ZIL is on SSD and there is still plenty of room for more intents to be logged, does the ZIL still automatically get flushed to SSD? If so, when/under what circumstances? Summary of questions if using SSD + HDD in a single zpool: ZFS obviously notices the difference in size between SSD and HDD partitions, but does ZFS automatically recognize the relative performance of SSD and HDD partitions? In particular, How are writes distributed across the SSD and HDD when both are relatively empty? Does ZFS try to do anything smart with data shuffling once the SSD part of the zpool fill up? In particular, If the SSD part of zpool is filled up, does ZFS ever anticipate that I will have more writes soon and tries to move data from SSD to HDD in the background? If the SSD part of zpool is filled up, and I start accessing a bunch of data off HDD, and not so much off SSD, does ZFS make any effort to swap the hot data to SSD? Finally, the most important question: Is it a good idea to set up SSD + HDD in same pool, or is there a better way to optimize my pair of drives for both speed and capacity?
While Marco's answer explained all the details correctly, I just want to focus on your last question/summary: Is it a good idea to set up SSD + HDD in same pool, or is there a better way to optimize my pair of drives for both speed and capacity? ZFS is a file system designed for large arrays with many smaller disks. Although it is quite flexible, I think it is suboptimal for your current situation and goal, for the following reasons: ZFS does no reshuffling of already written data. What you are looking for is called a hybrid drive; for example Apple's Fusion Drive allows fusing multiple disks together and automatically selects the storage location for every block based on access history (moving data is done when there is no load on the system or on rewrite). With ZFS, you have none of that, neither automatically nor manually; your data stays where it was written initially (or is already marked for deletion). With just a single disk, you give up on redundancy and self-healing. You still detect errors, but you do not use the full capabilities of the system. Both disks in the same pool mean an even higher chance of data loss (this is RAID0 after all) or corruption; additionally your performance will be subpar because of the different drive sizes and drive speeds. HDD+SLOG+L2ARC is a bit better, but you need a very good SSD (better two different ones like Marco said, but an NVMe SSD is a good and expensive compromise) and most of the space on it is wasted: 2 to 4 GB for the ZIL are enough, and a large L2ARC only helps if your RAM is full, but needs higher amounts of RAM itself. This leads to a sort of catch-22 - if you want to use L2ARC, you need more RAM, but then you can just use the RAM itself, because it is enough. Remember, only blocks are stored, so you do not need as much as you would assume by looking at plain files. Now, what are the alternatives?
You could split by having two pools: one for system, one for data. This way, you have no automatic rebalance and no redundancy, but a clean system which can be extended easily and which has no RAID0 problems.
Buy a second large HDD, make a mirror, use the SSD like you outlined: removes the problem of differently sized disks and disk speeds, gives you redundancy, keeps the SSD flexible (see the zpool sketch below).
Buy n SSDs and do RAIDZ1/2/3. Smaller SSDs are pretty cheap nowadays and do not suffer slow rebuild times, making RAIDZ1 interesting again.
Use another file system or volume manager with hybrid capabilities, ZFS on top if needed. This is not seen as optimal, but neither is working with two single disk vdevs in a pool... at least you get exactly what you want, and some nice things of ZFS (snapshots etc.) on top, but I wouldn't count on stellar performance.
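For the "second HDD plus SSD as SLOG/L2ARC" alternative, the pool setup would look roughly like this. The device names and partition split are invented for the sketch and will differ on a real system:
zpool create tank mirror /dev/sda /dev/sdb    # two equal HDDs as a mirror vdev
zpool add tank log /dev/sdc1                  # small SSD partition (a few GB) as SLOG
zpool add tank cache /dev/sdc2                # remaining SSD space as L2ARC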
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304369", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62975/" ] }
304,389
Processing text, I need to remove the newline character every two lines. Sample text:
this is line one
and this is line two
the third and the
fourth must be pasted too
Desired output:
this is line one and this is line two
the third and the fourth must be pasted too
I tried a while loop, but a while loop is bad practice. Is it possible to do it using tr or any other command?
paste (also a standard POSIX simple utility like tr ) is your tool for that. Assuming you want those newline characters replaced with a space instead of just removed as in your sample:
paste -d ' ' - - < file
Or:
paste -sd ' \n' file
Replace ' ' with '\0' if you do indeed want them removed. To replace 2 out of 3:
paste -sd ' \n' file
1 out of 3, starting with the second:
paste -sd '\n \n' file
And so on. Another good thing with paste is that it won't leave a line non-terminated. For instance, if you remove every newline in a file (as with tr -d '\n' < file or tr '\n' ' ' < file ), you end up with no line at all as lines need to be terminated with a newline character. So it's generally better to use paste instead for that (as in paste -sd '\0' file or paste -sd ' ' file ) which will add that trailing newline character necessary to have valid text.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304389", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163676/" ] }
304,391
When I start Xephyr and LXDE session inside: $ Xephyr :1 -screen 1920x1054 -nolisten tcp -reset -terminate$ DISPLAY=:1 startlxde some keys are not working properly, such as Up , PageUp , PageDown . Looking with xev , I see very funny key names: PageUp: HiraganaUp: KatakanaPageDown: Control_RLeft: Henkan_ModeDown: KP_EnterRight: Muhenkan Obviously, in normal LXDE session (without Xephyr), everything works normally. One thing that is relevant here: I am not using udev daemon on my system. (I just needed to add Option "AutoAddDevices" "Off" to /etc/X11/xorg.conf to make X work without udev . When I turn udev back on, the keys inside Xephyr work OK. But that is not a solution for me. How can I diagnose and fix this problem (without udev) ?
You can try to read the keyboard configuration of :0 with setxkbmap and to set it on :1 with xkbcomp: setxkbmap -display :0 -print | xkbcomp - :1
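Combined with the way the question starts Xephyr, a small wrapper along these lines should work; the sleep is just a crude way to give Xephyr time to come up and is an assumption of this sketch:
Xephyr :1 -screen 1920x1054 -nolisten tcp -reset -terminate &
sleep 1
setxkbmap -display :0 -print | xkbcomp - :1    # copy the keymap from the real display to :1
DISPLAY=:1 startlxde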
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
304,427
I'm reading a book, Learning Unix for OS X by Dave Taylor . It says: To quickly see all of the binary executables—Unix programs—on your system, Open the Terminal, hold down the Shift key, and press Esc-? , or press Control-X followed by Shift-1 (using Shift-1 to get an exclamation mark). Before the commands are displayed in the Terminal, however, you’ll first be prompted (asked) to make a choice: $ Display all 1453 possibilities? (y or n) If you press the n key on your keyboard, you’ll be taken back to a command prompt and nothing else will happen. However, if you press the y key, you’ll see a multi-column list of Unix commands stream past in the Terminal window. However, the problem is, when I hold down Shift key and press Esc-? nothing happens. Same for Pressing Control-X followed by Shift-1 . What am I doing wrong? Is there any setting that I need to enable before using this feature? I'm using iTerm2 on Mac El Capitan. It doesn't work on the stock terminal either. Any help would be much appreciated. Thank you.
The instructions in the book are for bash. Zsh is a different program with different key bindings. In zsh, you can see a list of all commands (external, builtin, function, alias even keywords...) with: type -m '*' For just their names: whence -wm '*' | sed 's/:[^:]*$//' Or for the names of external commands only: print -rlo -- $commands:t | less $commands is an array that contains all external commands. The history modifier :t truncates the directory part of the command paths (keeps only the t ail). print -rlo to print them r aw in alphabetical o rder, one per l ine. Longer, but less cryptic: for p in "$path[@]"; do (cd ${p:-.} && ls); done | sort -u | less This can be adjusted to work in any sh-style shell: (IFS=:; for p in $PATH; do (cd ${p:-.} && ls); done) | sort -u | less (All the commands I list here assume that there are no “unusual” characters in command paths.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304427", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183120/" ] }
304,504
I have Linux a server where I would like to add few other people besides me as a root users. All the users need to access this server very seldom if ever. As a first step I set a root user a password which we all know. Then I added those users with useradd : useradd -ou 0 -g 0 user1 -d /root/user1/ -s /bin/bashuseradd -ou 0 -g 0 user2 -d /root/user2/ -s /bin/bash After that I copied a password hash of a root user to user1 and user2 in /etc/shadow . Then I created .ssh/authorized_keys file for each user(both user1 and user2 have their personal SSH public key there) into their home directory and finally configured PermitRootLogin without-password in sshd_config . There is no authorized_keys file for root user, i.e. it's not possible to log into server using user root . This is basically what I need, but maybe this setup has some (security) drawbacks and there is a better solution?
What you have done is atypical and so I would not recommend it. Most programs assume a one to one mapping between uid and username. You could potentially open yourself to weird bugs/unknown behaviour or, worse, security vulnerabilities - this is an unknown for me so I cannot recommend it when there are better/more typical ways to achieve your goals. Given this, the first thing I would do is delete the accounts you have created. Now, if the users require root access there is nothing majorly wrong with giving them access to the root account, assuming you/they understand the risks associated with the root account. From a security perspective the weakest point in ssh is password auth, so I highly recommend disabling that. This can be done by editing/adding the following to /etc/ssh/sshd_config :
ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no
However, there are some issues with having root privileges for longer than you require. Most notably, it is easier for a simple mistake to do more damage. Given this it is better to create some unprivileged accounts that the users can log in to. Once logged in to their own account, they can obtain root privileges when required by using su - and the root password. Now, remote attacks are generally against the root user (as well as common user names such as admin ); with password auth disabled you generally do not have to worry about this as key auth is strong enough. If you have user accounts that can obtain root privileges (as described above and below) then you can further increase security by disabling the root account entirely by adding/editing the following to /etc/ssh/sshd_config :
PermitRootLogin no
WARNING: Ensure you have a user account that can gain root privileges or you can lock yourself out of the server. This might help protect you against misconfiguration or possible future vulnerabilities against ssh/openssl as the attacker now also requires a username to gain access to an unprivileged account where they then require a password to gain full root access. Finally, it is not best practice to share a password among multiple users. If you wish to revoke someone's access for any reason everyone needs to learn a new password. It is better to obtain root privileges using the user's own password. This way you simply need to lock the user's account, change the root password (as they could have modified it given they have root privileges) and everyone can continue to use their own passwords without issue (note there are more issues with locking someone out of a server they have had root access to as they could have done anything to the system, including changing other users' passwords or adding their key to other users). To do that you can use sudo in place of su to allow the users to obtain root privileges with their own password as opposed to root's. To do this install sudo and add the user to the wheel or sudo group (depends on the distro you use). Users can then use sudo <command> to run a command with root privileges or sudo -i to gain a root shell. This also gives you better auditing logs as each action executed by sudo is logged against a username - assuming you do not simply sudo -i all the time. You can even take this further by restricting users to running only a subset of commands rather than giving them access to everything (see the sudoers sketch below). This makes it easier to revoke access in the future as they are able to do less to the system.
But this depends on how much you trust your users and can be annoying if you do not know everything they require access to. Given this, an attacker requires a username as well as an ssh key or a vulnerability in key based auth to gain access to the server, where they can do limited damage. Then they require a password to actually gain root privileges and do real damage. Security is applied in layers, the more layers you have generally the more secure you are. But added layers also decrease usability. It is up to you to decide the balance you wish to apply to a given system. The steps I have described above will add a lot of security at a minimal cost to usability. There are further steps you can take to lock down systems even more than this but for most cases this should be enough.
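To illustrate the "subset of commands" idea, a sudoers fragment could look like the following. The group name and the command list are purely hypothetical, and the file should be created with visudo so a syntax error cannot lock you out:
# visudo -f /etc/sudoers.d/webadmins
%webadmins ALL=(root) /usr/sbin/service nginx restart, /usr/bin/journalctl -u nginx
Members of the webadmins group can then run only those two commands via sudo, nothing else.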
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304504", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
304,590
I am trying to edit an Apache module on Debian (strictly, I'm doing this on Raspbian Jessie-Lite), and am broadly following the Debian build instructions : $ mkdir -p ~/src/debian; cd ~/src/debian$ apt-get source apache2-bin$ cd apache2-2.4.10$ debuild -b -uc -us And the build process takes roughly one and a half hours on an olde original Pi. Which is fine. Once!But I believe the build process is performing a make clean and so after a minor edit of a single mod_*.c file, it wants to rebuild the entire thing, which is kind of slowing down my development! I have tried adding -dc to the debuild command, but then it didn't build anything. I even tried deleting the target mod_*.so file to "encourage" it into rebuilding it, but still no! UPDATE 2016-08-21: Adding -nc to the debuild command does not cause modules to be recompiled. Here's the output from that command: $ debuild -b -uc -us -nc dpkg-buildpackage -rfakeroot -D -us -uc -b -ncdpkg-buildpackage: source package apache2dpkg-buildpackage: source version 2.4.10-10+deb8u5dpkg-buildpackage: source distribution jessie-securitydpkg-buildpackage: source changed by Salvatore Bonaccorso <[email protected]> dpkg-source --before-build apache2-2.4.10dpkg-buildpackage: host architecture armhf debian/rules builddh build --parallel --with autotools_dev fakeroot debian/rules binarydh binary --parallel --with autotools_dev dpkg-genchanges -b >../apache2_2.4.10-10+deb8u5_armhf.changesdpkg-genchanges: binary-only upload (no source code included) dpkg-source --after-build apache2-2.4.10dpkg-buildpackage: binary-only upload (no source included)Now running lintian...N: 16 tags overridden (1 error, 4 warnings, 11 info)Finished running lintian.
Add the -nc option to your debuild command line. This may expose problems in the build system or the packaging though, so be prepared. But for small fixes it usually works fine. However, as the apache2 source package uses debhelper (like many other packages), this alone is not enough, because debhelper also keeps its own journal of completed steps in separate log files for each binary package. These can be removed entirely by dh_clean . But to get debhelper redo no more than the necessary work, truncate only the relevant one by sed -i '/^dh_auto_build$/Q' debian/apache2-bin.debhelper.log before running debuild -nc .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84945/" ] }
304,655
I have a file called input_file , the following is the contents of input_file :
$ cat input_file
1
2
3
4
5
6
7
8
9
then I run the following command:
$ od -to2 input_file
Output:
0000000 005061 005062 005063 005064 005065 005066 005067 005070
0000020 005071
0000022
My question is: What does the '005' mean in the output of od ?
The output option you chose will take 2 bytes and display the result as an octal number. So your file starts with the digit 1 and the character \n . We can see this easier with od -cx :
% od -cx f
0000000   1  \n   2  \n   3  \n   4  \n   5  \n   6  \n   7  \n   8  \n
        0a31 0a32 0a33 0a34 0a35 0a36 0a37 0a38
0000020   9  \n
        0a39
0000022
With your od -to2 it will take those 2 characters and treat them as a 'low byte, high byte' of a 16bit number. So the number works out to 10*256+49 (the \n is ASCII 10, and is the high byte; the 1 is ASCII 49 and is the low byte). That sum is 2609. 2609, in octal, is 005061 - which is the first number in your output. (In hex it's a31, which also matches the od -cx output). So this is what you're seeing; od is converting your input into 16 bit integers and displaying them in octal.
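You can double-check that arithmetic from the shell; this is just a quick verification sketch:
printf '%d\n' 0x0a31               # 2609, the 16-bit value built from '\n' (10) and '1' (49)
printf '%o\n' $((10 * 256 + 49))   # 5061, the same value in octal, matching od's 005061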
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143692/" ] }
304,661
Sometimes I have strange troubles booting my computer (which runs Debian). So I issued "dmesg" command. In its output I saw a lot of errors. However, when I run extended SMART test on hard disks (using "smartctl -t long /dev/sda" command), the result is that my disks are not broken. What can be the reason of those errors? Here are the errors: (...) [ 505.918537] ata3.00: exception Emask 0x50 SAct 0x400 SErr 0x280900 action 0x6 frozen [ 505.918549] ata3.00: irq_stat 0x08000000, interface fatal error [ 505.918558] ata3: SError: { UnrecovData HostInt 10B8B BadCRC } [ 505.918566] ata3.00: failed command: READ FPDMA QUEUED [ 505.918579] ata3.00: cmd 60/40:50:20:5b:60/00:00:0b:00:00/40 tag 10 ncq 32768 in res 40/00:54:20:5b:60/00:00:0b:00:00/40 Emask 0x50 (ATA bus error) [ 505.918586] ata3.00: status: { DRDY } [ 505.918595] ata3: hard resetting link [ 506.410055] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 506.422648] ata3.00: configured for UDMA/133 [ 506.422679] ata3: EH complete [ 1633.123880] md: bind<sdb3> [ 1633.187966] RAID1 conf printout: [ 1633.187977] --- wd:1 rd:2 [ 1633.187984] disk 0, wo:0, o:1, dev:sda3 [ 1633.187989] disk 1, wo:1, o:1, dev:sdb3 [ 1633.188866] md: recovery of RAID array md0 [ 1633.188871] md: minimum _guaranteed_ speed: 1000 KB/sec/disk. [ 1633.188875] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery. [ 1633.188890] md: using 128k window, over a total of 1943618560k. [ 1634.167341] ata3.00: exception Emask 0x50 SAct 0x7f80 SErr 0x280900 action 0x6 frozen [ 1634.167353] ata3.00: irq_stat 0x08000000, interface fatal error [ 1634.167361] ata3: SError: { UnrecovData HostInt 10B8B BadCRC } [ 1634.167369] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167382] ata3.00: cmd 60/00:38:00:00:6f/02:00:01:00:00/40 tag 7 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167389] ata3.00: status: { DRDY } [ 1634.167395] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167407] ata3.00: cmd 60/00:40:00:02:6f/02:00:01:00:00/40 tag 8 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167413] ata3.00: status: { DRDY } [ 1634.167418] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167429] ata3.00: cmd 60/00:48:00:04:6f/02:00:01:00:00/40 tag 9 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167435] ata3.00: status: { DRDY } [ 1634.167439] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167451] ata3.00: cmd 60/00:50:00:06:6f/02:00:01:00:00/40 tag 10 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167457] ata3.00: status: { DRDY } [ 1634.167462] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167473] ata3.00: cmd 60/00:58:00:08:6f/02:00:01:00:00/40 tag 11 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167479] ata3.00: status: { DRDY } [ 1634.167484] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167495] ata3.00: cmd 60/00:60:00:0a:6f/02:00:01:00:00/40 tag 12 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167500] ata3.00: status: { DRDY } [ 1634.167505] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167516] ata3.00: cmd 60/80:68:00:0c:6f/00:00:01:00:00/40 tag 13 ncq 65536 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167522] ata3.00: status: { DRDY } [ 1634.167527] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167538] ata3.00: cmd 
60/00:70:80:0c:6f/02:00:01:00:00/40 tag 14 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167544] ata3.00: status: { DRDY } [ 1634.167553] ata3: hard resetting link [ 1634.658816] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1634.672645] ata3.00: configured for UDMA/133 [ 1634.672696] ata3: EH complete [ 1637.687898] ata3.00: exception Emask 0x50 SAct 0x3ff000 SErr 0x280900 action 0x6 frozen [ 1637.687910] ata3.00: irq_stat 0x08000000, interface fatal error [ 1637.687918] ata3: SError: { UnrecovData HostInt 10B8B BadCRC } [ 1637.687926] ata3.00: failed command: READ FPDMA QUEUED [ 1637.687940] ata3.00: cmd 60/00:60:80:a7:af/02:00:02:00:00/40 tag 12 ncq 262144 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.687947] ata3.00: status: { DRDY } [ 1637.687953] ata3.00: failed command: READ FPDMA QUEUED [ 1637.687965] ata3.00: cmd 60/00:68:80:a9:af/02:00:02:00:00/40 tag 13 ncq 262144 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.687971] ata3.00: status: { DRDY } [ 1637.687976] ata3.00: failed command: READ FPDMA QUEUED [ 1637.687987] ata3.00: cmd 60/80:70:80:ab:af/01:00:02:00:00/40 tag 14 ncq 196608 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.687993] ata3.00: status: { DRDY } [ 1637.687998] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688009] ata3.00: cmd 60/00:78:00:ad:af/02:00:02:00:00/40 tag 15 ncq 262144 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688015] ata3.00: status: { DRDY } [ 1637.688020] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688031] ata3.00: cmd 60/80:80:00:af:af/00:00:02:00:00/40 tag 16 ncq 65536 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688037] ata3.00: status: { DRDY } [ 1637.688042] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688053] ata3.00: cmd 60/00:88:80:af:af/01:00:02:00:00/40 tag 17 ncq 131072 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688059] ata3.00: status: { DRDY } [ 1637.688064] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688075] ata3.00: cmd 60/80:90:80:b0:af/00:00:02:00:00/40 tag 18 ncq 65536 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688081] ata3.00: status: { DRDY } [ 1637.688085] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688096] ata3.00: cmd 60/00:98:00:b1:af/02:00:02:00:00/40 tag 19 ncq 262144 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688102] ata3.00: status: { DRDY } [ 1637.688107] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688118] ata3.00: cmd 60/00:a0:00:b3:af/01:00:02:00:00/40 tag 20 ncq 131072 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688124] ata3.00: status: { DRDY } [ 1637.688129] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688140] ata3.00: cmd 60/00:a8:00:b4:af/01:00:02:00:00/40 tag 21 ncq 131072 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688146] ata3.00: status: { DRDY } [ 1637.688154] ata3: hard resetting link [ 1638.179398] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1638.192977] ata3.00: configured for UDMA/133 [ 1638.193029] ata3: EH complete [ 1640.259492] md: export_rdev(sdb1) [ 1640.326109] md: bind<sdb1> [ 1640.346712] RAID1 conf printout: [ 1640.346724] --- wd:1 rd:2 [ 1640.346731] disk 0, wo:0, o:1, dev:sda1 [ 1640.346736] disk 1, wo:1, o:1, dev:sdb1 [ 1640.346893] md: delaying recovery of md1 until md0 
has finished (they share one or more physical units) [ 1657.987964] ata3.00: exception Emask 0x50 SAct 0x40000 SErr 0x280900 action 0x6 frozen [ 1657.987975] ata3.00: irq_stat 0x08000000, interface fatal error [ 1657.987984] ata3: SError: { UnrecovData HostInt 10B8B BadCRC } [ 1657.987992] ata3.00: failed command: READ FPDMA QUEUED [ 1657.988006] ata3.00: cmd 60/00:90:00:30:2e/03:00:09:00:00/40 tag 18 ncq 393216 in res 40/00:94:00:30:2e/00:00:09:00:00/40 Emask 0x50 (ATA bus error) [ 1657.988013] ata3.00: status: { DRDY } [ 1657.988022] ata3: hard resetting link [ 1658.479548] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1658.493107] ata3.00: configured for UDMA/133 [ 1658.493147] ata3: EH complete [ 1670.547791] ata3: limiting SATA link speed to 1.5 Gbps [ 1670.547805] ata3.00: exception Emask 0x50 SAct 0x7f SErr 0x280900 action 0x6 frozen [ 1670.547812] ata3.00: irq_stat 0x08000000, interface fatal error [ 1670.547820] ata3: SError: { UnrecovData HostInt 10B8B BadCRC } [ 1670.547826] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547839] ata3.00: cmd 60/80:00:00:1f:2e/01:00:0c:00:00/40 tag 0 ncq 196608 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547846] ata3.00: status: { DRDY } [ 1670.547852] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547863] ata3.00: cmd 60/80:08:80:20:2e/00:00:0c:00:00/40 tag 1 ncq 65536 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547869] ata3.00: status: { DRDY } [ 1670.547875] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547886] ata3.00: cmd 60/00:10:00:21:2e/02:00:0c:00:00/40 tag 2 ncq 262144 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547892] ata3.00: status: { DRDY } [ 1670.547896] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547907] ata3.00: cmd 60/00:18:00:23:2e/02:00:0c:00:00/40 tag 3 ncq 262144 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547913] ata3.00: status: { DRDY } [ 1670.547918] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547929] ata3.00: cmd 60/00:20:00:25:2e/01:00:0c:00:00/40 tag 4 ncq 131072 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547935] ata3.00: status: { DRDY } [ 1670.547940] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547951] ata3.00: cmd 60/00:28:00:26:2e/02:00:0c:00:00/40 tag 5 ncq 262144 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547957] ata3.00: status: { DRDY } [ 1670.547961] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547972] ata3.00: cmd 60/00:30:00:28:2e/02:00:0c:00:00/40 tag 6 ncq 262144 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547978] ata3.00: status: { DRDY } [ 1670.547987] ata3: hard resetting link [ 1671.039264] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 310) [ 1671.053386] ata3.00: configured for UDMA/133 [ 1671.053444] ata3: EH complete [ 2422.512002] md: md0: recovery done. [ 2422.547344] md: recovery of RAID array md1 [ 2422.547355] md: minimum _guaranteed_ speed: 1000 KB/sec/disk. [ 2422.547360] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery. [ 2422.547378] md: using 128k window, over a total of 4877312k. [ 2422.668465] RAID1 conf printout: [ 2422.668474] --- wd:2 rd:2 [ 2422.668480] disk 0, wo:0, o:1, dev:sda3 [ 2422.668486] disk 1, wo:0, o:1, dev:sdb3 [ 2469.990451] md: md1: recovery done. 
[ 2470.049986] RAID1 conf printout: [ 2470.049997] --- wd:2 rd:2 [ 2470.050003] disk 0, wo:0, o:1, dev:sda1 [ 2470.050009] disk 1, wo:0, o:1, dev:sdb1 [ 3304.445149] PM: Hibernation mode set to 'platform' [ 3304.782375] PM: Syncing filesystems ... done. [ 3307.028591] Freezing user space processes ... (elapsed 0.001 seconds) done. (...)
First, keep in mind that SMART saying that your drive is healthy doesn't necessarily mean that the drive is healthy. SMART reports are an aid , not an absolute truth. If all you are interested in is what to do, rather than why, then feel free to scroll down to the last few paragraphs; however, the interim text will tell you why I think what I propose is the correct course of action, and how to derive that from what you posted. With that said, let's look at what one of those errors are telling us. [ 1670.547805] ata3.00: exception Emask 0x50 SAct 0x7f SErr 0x280900 action 0x6 frozen[ 1670.547812] ata3.00: irq_stat 0x08000000, interface fatal error[ 1670.547820] ata3: SError: { UnrecovData HostInt 10B8B BadCRC }[ 1670.547826] ata3.00: failed command: READ FPDMA QUEUED[ 1670.547839] ata3.00: cmd 60/80:00:00:1f:2e/01:00:0c:00:00/40 tag 0 ncq 196608 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error)[ 1670.547846] ata3.00: status: { DRDY }[ 1670.547852] ata3.00: failed command: READ FPDMA QUEUED (I hope I got the parts that should go together, but you were getting a bundle of those so it should be okay either way.) The Linux ata Wiki has a page explaining how to read these errors . Particularly, A status value of DRDY means "Device ready. Normally 1, when all is OK." Seeing a status value of DRDY is perfectly normal and expected. SError has multiple component values, of which you are seeing (in this particular snippet): UnrecovData "Data integrity error occurred, interface did not recover" HostInt "Host bus adapter internal error" 10B8B "10b to 8b decoding error occurred" BadCRC "Link layer CRC error occurred" 10b8b coding, which encodes 8 bits as 10 bits to aid with both signal synchronization and error detection, is used on the physical cabling, not necessarily on the drive itself. The drive most likely uses other forms of FEC or ECC coding, and an error there would normally show up as some form of I/O error, likely with an error value of UNC ("uncorrectable error - often due to bad sectors on the disk"), likely with "media error" ("software detected a media error") in parenthesis at the end of the res line. This latter is not what you are seeing, so while we can't completely rule it out, it seems unlikely. The "link layer" is the physical cables and circuit board traces between the drive's own controller, and the disk drive interface chip (likely part of the southbridge on your computer's motherboard, but could be located at an offboard HBA). A host bus adapter, also known as a HBA, is the circuitry that connects to storage equipment. Also colloquially known as a "disk controller", a term which is a bit of a misnomer with modern systems. The most visible part of the HBA is generally the connection ports, most often these days either SATA or some SAS form factor. The UnrecovData and HostInt flags basically tell us that "something just went horribly wrong, and there was no way to recover or no attempt at recovery was made". The opposite would likely be RecovData , which indicates that a "data integrity error occurred, but the interface recovered". (As an aside, I probably would have used HBAInt instead of HostInt , as the "host" refers to the HBA, not the whole system.) The combination of 10B8B and BadCRC , which both point to the physical link layer, makes me suspect a cabling issue. 
This suspicion is also supported by the fact that the SMART self-tests, which are completely internal to the drive except for status reporting, are finding no errors that the manufacturer feels are serious enough to warrant reporting in the results. If the drive was having problems storing or reading data, the long SMART self-test in particular should have reported that. TL;DR: The first thing I would do is thus simply to unplug and re-plug the SATA cable at both ends; it may be slightly loose, causing it to lose electrical contact intermittently. See if that resolves the problem. It might even be worth doing this to all SATA cabling in your computer, not just the affected disk. If you are using an off-board HBA, I would also remove and re-seat that card, mainly because it's an easy thing to try while you are already messing around with the cabling. Failing that, try throwing away and replacing the SATA cable, preferably with a high-quality cable. A high-quality cable will be slightly more expensive, but I find that it's usually well worth the small extra expense if it helps avoid headaches like this. Nobody likes seeing their storage spewing errors!
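One extra piece of evidence for the cable theory is the drive's own CRC error counter, which counts errors on the SATA link rather than on the platters. A quick sketch (the device name is only an example):
smartctl -A /dev/sda | grep -iE 'crc|199'
# If UDMA_CRC_Error_Count (attribute 199) keeps climbing while these errors occur,
# that points at the cable or connector rather than the disk surface.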
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/304661", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22838/" ] }
304,663
According to cve.mitre.org , the Linux kernel before 4.7 is vulnerable to “Off-path” TCP exploits Description net/ipv4/tcp_input.c in the Linux kernel before 4.7 does not properly determine the rate of challenge ACK segments, which makes it easier for man-in-the-middle attackers to hijack TCP sessions via a blind in-window attack. This vulnerability is considered as dangerous because the attacker just needs an IP address to perform an attack. Does upgrading the Linux kernel to the latest stable version, 4.7.1 , become the only way to protect my system?
According to LWN there is a mitigation which can be used while you do not have a patched kernel: there is a mitigation available in the form of the tcp_challenge_ack_limit sysctl knob. Setting that value to something enormous (e.g. 999999999 ) will make it much harder for attackers to exploit the flaw. You should set it by creating a file in /etc/sysctl.d and then loading it with sysctl --system. Open a terminal (press Ctrl + Alt + T ), and run:
sudo -i
echo "# CVE-2016-5696
net.ipv4.tcp_challenge_ack_limit = 999999999" > /etc/sysctl.d/security.conf
sysctl --system
exit
By the way, you can track the state of this vulnerability on Debian in the security tracker .
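To confirm the knob actually took the new value, and to see whether you are already on a fixed kernel, a quick check like this should do:
sysctl net.ipv4.tcp_challenge_ack_limit    # should now report 999999999
uname -r                                   # kernels 4.7 and later contain the proper fix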
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304663", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153195/" ] }
304,692
Let's say I have a directory with multiple subdirectories, each of which contains some files: 1/a.txt1/b.txt2/c.txt2/d.txt3/e.txt3/f.txt I want to see the size of each file. Please keep in mind, I know there are easier and more direct ways to do this, such as du -a . I just want to know why the following doesn't work, for educational purposes. My pointless exercise Running find . returns a list of all files and directories, so I tried piping it like so: find . | xargs du but that just returns the sizes of the directories 1, 2, and 3. I'm missing some bit of understanding, because in my mind xargs should be mapping each line of output from find to a call to du . If instead I use: find . | xargs du -a then it works as expected, listing the sizes of all files and directories. It also works fine if I only pass it a list of files by using the -type f option, so it's something to do with receiving a list of directories mixed with files. What's going on here?
What's going on is that xargs puts (if it can) all of the names on one command-line, so that you see only one command passed to du . Then du ignores the filenames (as you might expect: the files are part of the directories and it does not count those twice). If you use a -n 1 parameter to xargs , it will split the command up and you will see something more as you expect. find . | xargs -n 1 du -a
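Two related sketches, in case file names contain whitespace or you would rather drop xargs entirely; neither is part of the answer above:
find . -print0 | xargs -0 -n 1 du    # NUL-separated names, one du call per path
find . -type f -exec du -h {} +      # per-file sizes without xargs at all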
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185668/" ] }
304,711
On my system (I am just running Fluxbox 1.3.7 on FreeBSD 10.3 – but I've seen that on Linux, too) Firefox has a much more narrow scrollbar than on Windows (which also behaves differently). How do I increase the width of the scrollbar? It would also be nice to get the usual behavior, i.e. clicking on the scrollbar should move the scrollbox just one step up or down and not position it where I clicked. EDIT: Installing this addon solves the problem (in a not very elegant way). The scrollbar looks ridiculous now, but at least it has a nice width. ;-)
In about:config , set widget.non-native-theme.scrollbar.size.override to the desired numeric value. This will only have an effect if widget.non-native-theme.enabled is true (the default as of this writing). Older versions may instead have the setting named widget.non-native-theme.scrollbar.size .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304711", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150422/" ] }
304,779
For a long time, Linux hasn't bothered with file creation dates because none of the file systems it commonly used supported them. However now, 2 file systems commonly used (NTFS and ext4) both record file creation dates. The stat command, however, still outputs Birth: - on an ext4 file system, even though we can see that ext4 has stored the file's create date using debugfs -R 'stat <inode_number>' /dev/file_device . When I looked into why this is, I saw that someone else has already recently filed a bug report on it, and the response links through to an upstream issue that simply states "there is no Linux kernel interface at present to get that info [file creation date]". It seems remarkable to me that this is apparently still the case, as people have been requesting that stat display this info for years (and stat does output a Birth field, even though it apparently doesn't support it yet! Did they add it in anticipation?) So is it still true that there is no Linux kernel interface at present to get file creation date? Is there a plan to implement this ever?
EDIT: Good news, statx() has been merged so it should be available in release 4.11. https://lwn.net/Articles/716302/ https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=a528d35e8bfcc521d7cb70aaf03e1bd296c8493f xstat() work, currently statx(), was revised in 2016. http://lwn.net/Articles/685791/ http://lwn.net/Articles/686106/ The process was a bit more disciplined this time (less bikeshedding, agreement to drop controversial attributes as they can always be added later). Unfortunately there were still objections to the exact interface and I haven't seen any more recent references.
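On a kernel with statx() (4.11 or later) and a coreutils built to use it (8.31 or later), GNU stat can show the birth time directly; a small sketch assuming those versions:
stat --format='birth: %w (%W)' somefile   # %w is human-readable, %W is seconds since the Epoch
# On older combinations this prints '-' or 0, meaning the birth time is not available.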
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8831/" ] }
304,837
When I tried to upgrade from Ubuntu 14.04 to 16.04, this is the message logs shown after sudo apt-get upgrade . Notice the last line, how do I forcefully update those 961 items? .......................................................................... .......................................................................... xserver-xorg-input-vmmouse-lts-vivid xserver-xorg-input-wacom-lts-vivid xserver-xorg-lts-vivid xserver-xorg-video-all-lts-vivid xserver-xorg-video-ati-lts-vivid xserver-xorg-video-cirrus-lts-vivid xserver-xorg-video-fbdev-lts-vivid xserver-xorg-video-intel-lts-vivid xserver-xorg-video-mach64-lts-vivid xserver-xorg-video-mga-lts-vivid xserver-xorg-video-neomagic-lts-vivid xserver-xorg-video-nouveau-lts-vivid xserver-xorg-video-openchrome-lts-vivid xserver-xorg-video-r128-lts-vivid xserver-xorg-video-radeon-lts-vivid xserver-xorg-video-savage-lts-vivid xserver-xorg-video-siliconmotion-lts-vivid xserver-xorg-video-sisusb-lts-vivid xserver-xorg-video-tdfx-lts-vivid xserver-xorg-video-trident-lts-vivid xserver-xorg-video-vesa-lts-vivid xserver-xorg-video-vmware-lts-vivid yelp zeitgeist-core zenity zenity-common 0 upgraded, 0 newly installed, 0 to remove and 961 not upgraded.
apt-get upgrade plays it safe: it upgrades all the packages that can be upgraded without breaking other packages. If upgrading package A requires uninstalling package B, apt-get upgrade won't do it, and A ends up in the “not upgraded” list. Over time, packages get broken into pieces, joined together, renamed, etc. In addition to basic dependencies (A requires B), a package C can declare that it “replaces” a package B, indicating that when C is installed, it should be ok to uninstall B. Apt also has a concept of automatically-installed vs manually-installed packages; it should be ok to remove an automatically-installed package even if it isn't explicitly getting replaced, whereas manually-installed packages are requested by the user and should stay. But apt-get upgrade doesn't take any risk. There is another command apt-get dist-upgrade which is willing to remove packages if necessary. The idea is that apt-get upgrade only upgrades individual packages, whereas apt-get dist-upgrade upgrades the whole distribution. apt-get upgrade is low-risk and you can pretty much do it without paying attention, whereas apt-get dist-upgrade might occasionally remove a program that you rely on, especially if you haven't taken care to mark all the packages you absolutely need as manually installed. You're unlikely to end up with a broken system after apt-get dist-upgrade , but sometimes you might need to reinstall a package or two. Aptitude has the same command duality, but has introduced preferred synonyms: safe-upgrade = upgrade vs. full-upgrade = dist-upgrade . In addition, Ubuntu provides a program called do-release-upgrade which is the recommended way to upgrade from one Ubuntu release to the next (or from one Ubuntu LTS to the next). This program runs apt-get dist-upgrade under the hood, but makes some checks and preparations first and performs some cleanup afterwards. In summary:
If upgrading between Ubuntu releases, use do-release-upgrade .
If you're just installing security updates and bug fixes, use apt-get update followed by apt-get upgrade (or aptitude safe-upgrade ).
Otherwise use apt-get update followed by apt-get dist-upgrade (or aptitude full-upgrade ).
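If you want to protect particular packages before running dist-upgrade, apt-mark is the tool; the package name below is only an example:
apt-mark showmanual                # list packages currently marked as manually installed
sudo apt-mark manual firefox       # make sure a package you rely on is marked manual
sudo apt-get update && sudo apt-get dist-upgrade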
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304837", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144663/" ] }
304,895
As for ./script.sh arg1 [arg2 arg3 ...] , the command line arguments arg1 , arg2 , ... can be got by $1 , $2 , ... But the number of arguments is NOT fixed. In the shell script, I want to pass the arguments starting from arg2 to a program, #/bin/bash.../path/to/a/program [I want to pass arg2 arg3 ... to the program]... How could I do it since there could be one or more arguments?
The usual way would be to save a copy of arg1 ( "$1" ) and shift the parameters by one, so you can refer to the whole list as "$@" : #!/bin/sharg1="$1"shift 1/path/to/a/program "$@" bash has some array support of course, but it is not needed for the question as posed. If even arg1 is optional, you would check for it like this: if [ $# != 0 ]then arg1="$1" shift 1fi
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304895", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67765/" ] }
304,896
I want to know what program calls a particular executable, including when that executable is used as an interpreter via a shebang line. This is not quite the same problem as knowing what program accesses a particular file . For example, auditctl -w /usr/bin/myprogram tells me that the program is being executed by… itself, since the audit event is generated after the successful execve call. One option is to replace the executable by a wrapper program, like this… #!/bin/shlogger "$0: executed by uid=$(id -u) ruid=$(id -ur) cmd=$(ps -o args= -p $PPID)"exec "$0.real" "$@" But this requires moving the actual file, which is disruptive (the file can't be read-only, it clashes with modifications made by a package manager, etc.). And it doesn't work if the program is used as an interpreter for a script, because shebang doesn't nest. (In that case, auditctl -w /usr/bin/interpreter does give a useful result, but I want a solution that works for both cases.) It also doesn't work for setuid programs if /bin/sh is bash since bash drops privileges. How can I monitor executions of a particular executable including uses of the executable as a shebang interpreter, and in particular log useful information about the calling process (not just the PPID but at least the process name or the parent executable path, ideally also the invoking user and arguments)? Preferably without replacing the file with a wrapper. A Linux-specific solution is fine.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304896", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
304,977
Say I have these two input files:
> file1
2
3
4
> file2
10
100
1000
And I would like to compute: file1/file2 for each line, resulting in file3:
> file 3
0.2
0.03
0.004
Divisions in bash can be achieved by: $((file1_line/file2_line))
A paste and bc combination is a good choice for simple arithmetic:
paste -d/ file1 file2 | bc -l
Output:
.2000000000
.0300000000
.0040000000
A more advanced example
With some trickery you can get a bit more complicated. Say file3 contains:
6
7
8
You can do (file1 + file3) / file2 like this:
paste -d'(+)/' /dev/null file1 file3 /dev/null file2
Output:
(2+6)/10
(3+7)/100
(4+8)/1000
This works because paste cycles through its delimiter list for each line.
React to divide-by-zero
Illegal operations sent to bc result in a warning being sent to standard error. You could redirect these to a different file and decide program flow based on its content, e.g.:
paste -d/ file1 file2 | bc -l > resultfile 2> errorfile
if grep -q 'Divide by zero' errorfile; then
    echo "Error in calculation"
else
    echo "All is well"
fi
Or if there was any error:
paste -d/ file1 file2 | bc -l > resultfile 2> errorfile
if ! file errorfile | grep -q empty; then
    echo "Error in calculation"
else
    echo "All is well"
fi
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/304977", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74555/" ] }
304,981
I'm trying to take a screenshot of Firefox running on Unity/Ubuntu 16.04 inside VirtualBox 5.1.2. Pressing Prt Scrn pops open a little app called Screenshot but the screenshot shown is only ever the desktop, with no application windows visible. My host operating system is Windows 10 Pro. Does anyone know how to resolve this and get screenshot functionality working inside VirtualBox?
You may take a screenshot with VirtualBox's built-in screenshot capability, either through pressing RightCtrl + E (or LeftCmd + E on Mac), which will open up a save file window on the host), or with $ VBoxManage controlvm vmname screenshotpng screenshot.png from the command line of the host, where vmname is the name of the virtual machine and screenshot.png is the name of the PNG image you'd like to create.
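If you are unsure what to use for vmname, VirtualBox can list it for you; the VM name below is just an example:
VBoxManage list runningvms                                            # prints "name" {uuid} for each running VM
VBoxManage controlvm "Ubuntu 16.04" screenshotpng ~/screenshot.png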
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/304981", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185898/" ] }
305,002
I have a problem with running mysql server. I've just installed FreeBSD 10.3 and I want to run here MySQL server, but process doesn't starts. Here are all commands i gave after install FreeBSD, step-by-step: portsnap fetch extractpkg updatepkg install mysql57-server /* Here mysql says about .mysql_secret file with password to root, but it's not generating at all. I can use but there is no result... */ find / -iname .mysql_secret When I try to first run MySQL using this command: mysqld_safe --initialize --user=mysql I get this one: mysqld_safe Logging to '/var/db/mysql/host.err'mysqld_safe Starting mysqld deamon with databases from /var/db/mysqlmysqld_safe mysqld from pid file /var/db/mysql/host.pid ended Here you are /var/db/mysql/host.err 2016-08-22T11:56:27.6NZ mysqld_safe Starting mysqld daemon with databases from /var/db/mysql2016-08-22T11:56:27.533572Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.2016-08-22T11:56:27.533635Z 0 [ERROR] Aborting2016-08-22T11:56:27.6NZ mysqld_safe mysqld from pid file /var/db/mysql/host.pid ended I found something simmilar: https://forums.freebsd.org/threads/56275/ https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=209512 There is still no solution. Any ideas? I really need MySQL. I have tried with MySQL 5.6 too. Same problem... At the end /usr/local/etc/mysql/my.cnf # $FreeBSD: branches/2016Q3/databases/mysql57-server/files/my.cnf.sample.in 414707 2016-05-06 14:39:59Z riggs $[client]port = 3306socket = /tmp/mysql.sock[mysql]prompt = \u@\h [\d]>\_no_auto_rehash[mysqld]user = mysqlport = 3306socket = /tmp/mysql.sockbind-address = 127.0.0.1basedir = /usr/localdatadir = /var/db/mysqltmpdir = /var/db/mysql_tmpdirslave-load-tmpdir = /var/db/mysql_tmpdirsecure-file-priv = /var/db/mysql_securelog-bin = mysql-binlog-output = TABLEmaster-info-repository = TABLErelay-log-info-repository = TABLErelay-log-recovery = 1slow-query-log = 1server-id = 1sync_binlog = 1sync_relay_log = 1binlog_cache_size = 16Mexpire_logs_days = 30default_password_lifetime = 0enforce-gtid-consistency = 1gtid-mode = ONsafe-user-create = 1lower_case_table_names = 1explicit-defaults-for-timestamp = 1myisam-recover-options = BACKUP,FORCEopen_files_limit = 32768table_open_cache = 16384table_definition_cache = 8192net_retry_count = 16384key_buffer_size = 256Mmax_allowed_packet = 64Mquery_cache_type = 0query_cache_size = 0long_query_time = 0.5innodb_buffer_pool_size = 1Ginnodb_data_home_dir = /var/db/mysqlinnodb_log_group_home_dir = /var/db/mysqlinnodb_data_file_path = ibdata1:128M:autoextendinnodb_temp_data_file_path = ibtmp1:128M:autoextendinnodb_flush_method = O_DIRECTinnodb_log_file_size = 256Minnodb_log_buffer_size = 16Minnodb_write_io_threads = 8innodb_read_io_threads = 8innodb_autoinc_lock_mode = 2skip-symbolic-links[mysqldump]max_allowed_packet = 256Mquote_namesquick
You may take a screenshot with VirtualBox's built-in screenshot capability, either through pressing RightCtrl + E (or LeftCmd + E on Mac), which will open up a save file window on the host), or with $ VBoxManage controlvm vmname screenshotpng screenshot.png from the command line of the host, where vmname is the name of the virtual machine and screenshot.png is the name of the PNG image you'd like to create.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185915/" ] }
305,017
I just came across the advice that if you want to get rid of a large file and a process has the file handle open you should copy it to /dev/null and its size will be reduce to zero. How does this work? Or does this even work? After a quick search I found conflicting answers ranging from "Yes, this totally works" to " Your whole machine is going to blow up". Can somebody enlighten me? I found the question here: https://syedali.net/engineer-interview-questions/
You misread the advice, the idea is not to copy the large file to /dev/null , which wouldn't affect it in any way, outside putting it in cache if space is available. cp bigfile /dev/null # useless The advice is not to remove the file then copy /dev/null to it, as it would keep the original inode unchanged and won't free any disk space as long as processes have that file open. The advice is to replace the file content with /dev/null one, which, given the fact /dev/null size is zero by design, is effectively truncating the file to zero bytes: cp /dev/null bigfile # workscat /dev/null > bigfile # works It might be noted that if you use a shell to run these commands, there is no need to use /dev/null , a simple empty redirection will have the same effect and would be more efficient, cat /dev/null being a no-op anyway. : > bigfile # better> bigfile # even better if the shell used supports it
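As a small illustration of the same idea (the path and log file are hypothetical), you can check who still holds the file open and then truncate it in place; coreutils' truncate does the same job as the empty redirection:

    lsof /var/log/huge.log           # see which process still has the file open
    : > /var/log/huge.log            # truncate in place, keeping the same inode
    truncate -s 0 /var/log/huge.log  # equivalent, using coreutils
    du -h /var/log/huge.log          # should now report (close to) 0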
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185941/" ] }
305,039
I have a bash script that looks like the following: ##script#!/bin/bashrm data*rm logfile*for i in {1..30}do## append a & if you want to run it parallel;nohup Rscript --vanilla main.R 10 100 $i &> logfile"$i" &done I would like to create another for loop after the first one to continue for another 30. For example ##script#!/bin/bashrm data*rm logfile*for i in {1..30}do## append a & if you want to run it parallel;nohup Rscript --vanilla main.R 10 100 $i &> logfile"$i" &for i in {31..60}do## append a & if you want to run it parallel;nohup Rscript --vanilla main.R 10 100 $i &> logfile"$i" &done I would like for the first set of jobs to finish before starting the new set. But because of the nohup it seems that they are all run simultaneously. I have nohup because I remotely login to my server and start the jobs there and then close my bash. Is there an alternative solution?
You'll want to use the wait command to do this for you. You can either capture all of the children process IDs and wait for them specifically, or if they are the only background processes your script is creating, you can just call wait without an argument. For example: #!/bin/bash# run two processes in the background and wait for them to finishnohup sleep 3 &nohup sleep 10 &echo "This will wait until both are done"datewaitdateecho "Done"
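Applied to the script in the question, a minimal sketch (assuming the same Rscript invocation, with an equivalent redirection) would run the first 30 jobs, wait for all of them, then start the next 30:

    #!/bin/bash
    for i in {1..30}; do
        nohup Rscript --vanilla main.R 10 100 "$i" > "logfile$i" 2>&1 &
    done
    wait    # blocks until every job from the first batch has exited
    for i in {31..60}; do
        nohup Rscript --vanilla main.R 10 100 "$i" > "logfile$i" 2>&1 &
    done
    wait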
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/305039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92825/" ] }
305,055
I have a directory on an nfs mount, which on the server is at /home/myname/.rubies Root cannot access this directory: [mitchell.usher@server ~]$ stat /home/mitchell.usher/.rubies File: `/home/mitchell.usher/.rubies' Size: 4096 Blocks: 8 IO Block: 32768 directoryDevice: 15h/21d Inode: 245910 Links: 3Access: (0755/drwxr-xr-x) Uid: ( 970/mitchell.usher) Gid: ( 100/ users)Access: 2016-08-22 15:06:15.000000000 +0000Modify: 2016-08-22 14:55:00.000000000 +0000Change: 2016-08-22 14:55:00.000000000 +0000[mitchell.usher@server ~]$ sudo !!sudo stat /home/mitchell.usher/.rubiesstat: cannot stat `/home/mitchell.usher/.rubies': Permission denied I am attempting to copy something from within that directory to /opt which only root has access to: [mitchell.usher@server ~]$ cp .rubies/ruby-2.1.3/ -r /optcp: cannot create directory `/opt/ruby-2.1.3': Permission denied[mitchell.usher@server ~]$ sudo !!sudo cp .rubies/ruby-2.1.3/ -r /optcp: cannot stat `.rubies/ruby-2.1.3/': Permission denied Obviously I can do the following (and is what I've done for the time being): [mitchell.usher@server ~]$ cp -r .rubies/ruby-2.1.3/ /tmp/[mitchell.usher@server ~]$ sudo cp -r /tmp/ruby-2.1.3/ /opt/ Is there any way to do this that wouldn't involve copying it as an intermediary step or changing permissions?
You can use tar as a buffer process cd .rubiestar cf - ruby-2.1.3 | ( cd /opt && sudo tar xvfp - ) The first tar runs as you and so can read your home directory; the second tar runs under sudo and so can write to /opt .
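For a single file, the same "reader runs as you, writer runs as root" idea can be done with tee; the file name here is only an example, not something from the question:

    cat .rubies/ruby-2.1.3/somefile | sudo tee /opt/somefile > /dev/null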
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/305055", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73386/" ] }
305,057
I would like to update and install some software on a Red Hat machine but have no subscription and don't plan on getting one. To get Wine I'm following this tutorial . After doing yum groupinstall 'Development Tools' I get: Loaded plugins: langpacks, product-id, subscription-managerThis system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.There is no installed groups file.Maybe run: yum groups mark convert (see man yum)Warning: group Development Tools does not exist.Maybe run: yum groups mark install (see man yum)No packages in any requested group available to install or update
You can use tar as a buffer process cd .rubiestar cf - ruby-2.1.3 | ( cd /opt && sudo tar xvfp - ) The first tar runs as you and so can read your home directory; the second tar runs under sudo and so can write to /opt .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/305057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185966/" ] }
305,061
Result of dig -6 google.com : ; <<>> DiG 9.8.3-P1 <<>> -6 google.com;; global options: +cmd;; connection timed out; no servers could be reached What does it mean if dig -4 google.com works correctly? Does it mean that my provider doesn't support IPv6? Update My /etc/resolv.conf ## Mac OS X Notice## This file is not used by the host name and address resolution# or the DNS query routing mechanisms used by most processes on# this Mac OS X system.## This file is automatically generated.#nameserver 192.168.88.1 192.168.88.1 is my router
-4 / -6 tells dig to only use IPv4/IPv6 connectivity to carry your query to the nameserver - it doesn't change whether to query for A records(IPv4) or AAAA records(IPv6) if that's what you intended. If dig -4 works but dig -6 doesn't, it just means that your local nameserver can't be reached via IPv6, which can have various reasons. Sure, not having IPv6 connectivity is among them but it's unfortunately also common for some specific home routers to not act as a DNS forwarder on IPv6. They don't strictly need to, since your machine can use IPv4 to query for AAAA records. If you want to quickly check if you can reach google.com via IPv6, you could do ping6 google.com
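To make the distinction concrete, these are standard dig invocations showing that the record type is chosen independently of the transport:

    dig google.com A         # ask for IPv4 addresses
    dig google.com AAAA      # ask for IPv6 addresses; the query itself can still travel over IPv4
    dig -4 google.com AAAA   # force IPv4 transport while asking for AAAA (IPv6) records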
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/305061", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48776/" ] }
305,101
Maybe a broad question, but I would like to know if there is any difference between those distros regarding security and privacy. For instance, I was not able to find an official privacy policy for Linux Mint, and I read that Ubuntu shares Dash-typed info with third parties unless you opt out. Which respects user privacy more? Are both source codes fully and publicly available for scrutiny? Are rolling-release distros safer? Thanks in advance.
-4 / -6 tells dig to only use IPv4/IPv6 connectivity to carry your query to the nameserver - it doesn't change whether to query for A records(IPv4) or AAAA records(IPv6) if that's what you intended. If dig -4 works but dig -6 doesn't, it just means that your local nameserver can't be reached via IPv6, which can have various reasons. Sure, not having IPv6 connectivity is among them but it's unfortunately also common for some specific home routers to not act as a DNS forwarder on IPv6. They don't strictly need to, since your machine can use IPv4 to query for AAAA records. If you want to quickly check if you can reach google.com via IPv6, you could do ping6 google.com
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/305101", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186001/" ] }
305,124
Some usernames on FreeBSD start with an underscore: _dhcp:*:65:65:dhcp programs:/var/empty:/usr/sbin/nologin but others do not: www:*:80:80:World Wide Web Owner:/nonexistent:/usr/sbin/nologin What's the significance of this underscore? Is it purely historical or does it serve a practical purpose? Some more examples can be seen in the FreeBSD ports/UIDs file.
There may be more than one case, but the one you point out was discussed in a mailing-list thread ISC DHCP Server port UID/GID question in 2008, where the _dhcp user was known to be a special account (with different privileges from the daemon): I noticed that, but I believe that that is a privilege separation account that is used with the OpenBSD-version of the dhclient. Also, as I pointed out, if this is usable, then why isn't the isc-dhcp-server port using it instead of allocating a UID/GID for itself during the install?ErikFlorent Thoumie wrote:> On Jan 18, 2008 12:01 PM, Erik Van Benschoten <evanben at valleycomnet.com> wrote:>> Greetings,>>>> Is there a specific reason that the port of the ISC's DHCP server does>> not seem to have/use a registered UID/GID?> > Maybe because there's already _dhcp user (uid 65) in base? Checking my FreeBSD 10 machine, I see another account, this one labeled clearly enough: _pflogd:*:64:64:pflogd privsep user:/var/empty:/usr/sbin/nologin Further reading: Privilege Separated OpenSSH OpenBSD 3.6 ChangeLog Have dhclient(8) fall back to user nobody if user _dhcp doesn't exist. Helps with upgrades. New _dhcp user and group for, funnily enough, the DHCP programs. "user/group _pflogd:_pflogd" what's with the _ ? (freebsd-current mailing list, 2004) pf not logging on 5.3-BETA3 ? (freebsd-pf mailing list, 2004) Okay, have you guys read UPDATING? > 20040623: > pf was updated to OpenBSD-stable 3.5 and pflogd(8) is privilege > separated now. It uses the newly created "_pflogd" user/group > combination. If you plan to use pflogd(8) make sure to run > mergemaster -p or install the "_pflogd" user and group manually.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186019/" ] }
305,172
My sound and wireless hardware are not working under my current 3.16.x kernel on my Debian 8 system. I performed: apt-cache search linux-image with the intention of getting the 4.x version linux kernel to try to fix this (as the hardware works fine under Ubuntu 16.04). However it seems the choice of kernel is limited to: linux-image-3.16.0-4-amd64 - Linux 3.16 for 64-bit PCs I would like to install the 4.x version and have the option to switch between the current kernel and the 4.x version. How can I do this using apt-get or a simple way that does not require manual compilation?
Add something like deb http://mirror.one.com/debian/ jessie-backports main contrib non-free to your sources.list . To install the 4.6 kernel, run: apt-get update apt-get install -t jessie-backports linux-image linux-image-amd64 It might depend on a few other things that can also be found in backports, you might have to add those packages names to the command line explicitly. Apt will automatically track the versions in backports for the packages you install from backports, and not install anything from there unless you explicitly ask for them. And after reading the entire question: It should be possible to leave the old kernel installed, and then grub should be configured to offer you a choice.
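Before installing, you can check what the backports repository actually offers; a rough sketch (exact package names may differ over time):

    apt-get update
    apt-cache policy linux-image-amd64                       # candidate versions per repository
    apt-cache search linux-image | grep '^linux-image-4'     # list available 4.x kernel packages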
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305172", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177246/" ] }
305,178
I am planning to use my enterprise email address (I work at a company) as a medium for commands, and I want this to work on a Linux box: my email needs to be received there so that incoming messages can be acted on. Example: I send an email with the subject getsummary ; when the Linux box receives an email with that subject, I want it to execute a Perl script I made that sends a summary back to the requestor.
Add something like deb http://mirror.one.com/debian/ jessie-backports main contrib non-free to your sources.list . To install the 4.6 kernel, run: apt-get update apt-get install -t jessie-backports linux-image linux-image-amd64 It might depend on a few other things that can also be found in backports, you might have to add those packages names to the command line explicitly. Apt will automatically track the versions in backports for the packages you install from backports, and not install anything from there unless you explicitly ask for them. And after reading the entire question: It should be possible to leave the old kernel installed, and then grub should be configured to offer you a choice.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305178", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108197/" ] }
305,241
I have a file that looks something like this: ID101 G T freq=.5 nonetype ANC=.1 addinforID102 A T freq=.3 ANC=.01 addinforID102 A T freq=.01 type=1 ALT=0.022 ANC=.02 addinfor As you can see, each line has a slightly different number of columns. I specifically want column 1, column 2, column 3, column 4 and the column that starts with ANC= . Desired output: ID101 G T freq=.5 ANC=.1ID102 A T freq=.3 ANC=.01ID102 A T freq=.01 ANC=.02 I generally use an awk command to parse files: awk 'BEGIN {OFS = "\t"} {print $1, $2, $3, $4}' Is there an easy way to alter this command to work for situations like this? I think something like this might work: awk '{for(j=1;j<=NF;j++){if($j~/^ANC=/){print $j}}}' However, how can I edit this to also print the first columns?
With awk : awk '{for(i=5;i<=NF;i++){if($i~/^ANC=/){a=$i}} print $1,$2,$3,$4,a}' file for(...) loops through all fields, starting with field 5 ( i=5 ). if($i~/^ANC=/) checks if the field starts with ANC= a=$i if yes, set variable a to that value print $1,$2,$3,$4,a print fields 1-4 followed by whatever is stored in a . Can be combined with BEGIN {OFS="\t"} of course.
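One caveat with the snippet above: the variable a keeps its value from the previous line, so a line without an ANC= field would silently reuse it. A hedged variant that resets the value on every line and prints NA when the field is missing:

    awk 'BEGIN{OFS="\t"} {a="NA"; for(i=5;i<=NF;i++) if($i~/^ANC=/) a=$i; print $1,$2,$3,$4,a}' file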
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/305241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121711/" ] }
305,283
Here is a small sample of the files I need to rename: ID_EDMSCP_20160815.txt.pgp ID_EDMSCP_20160816.txt.pgp ID_EDMSCP_20160817.txt.pgp ID_EDMSCP_20160818.txt.pgp I have created a script from someone else's question as follows: for file in ID_EDMSCP_*.txt.pgp do mv -i "${file}" "${file/-ID-/-SUBMIT_GO_ID-}" done However, I am getting the following: mv: `ID_EDMSCP_20160815.txt.pgp' and `ID_EDMSCP_20160815.txt.pgp' are the same file And nothing is being renamed. Am I missing something?
mv -i "${file}" "${file/-ID-/-SUBMIT_GO_ID-}" That string replacement tells it to replace -ID- , which doesn't occur in your source file names. I think you mean to say this instead: mv -i "${file}" "${file/ID_/SUBMIT_GO_ID_}" Another problem you might be having is that this is a ksh93 feature, also available in Bash and Zsh, but not in pure POSIX shells . If you're using #!/bin/sh at the top of the script, it will run under either a POSIX shell or a pre-POSIX Bourne shell interpreter on some systems , which won't understand that. There are a bunch of alternatives if you cannot use this nonstandard feature of advanced shells: sed If you need this script to be portable to all Unix type systems, you could switch to sed for the replacement: mv -i "$file" "$(echo $file | sed -e 's/^ID_/SUBMIT_GO_ID_/')" That still requires at least a POSIX shell, due to the use of $() style command interpolation, but that's an easier "ask" than to expect an advanced shell everywhere. If your script had to run on every Bourne type shell ever, you could switch to backtick style command interpolation, which can be tricky to get right if you have embedded spaces in the file names, but that doesn't seem to be the case here, according to your posted example. So, this should be equivalent to the prior option for your purposes: mv -i "$file" `echo $file | sed -e s/^ID_/SUBMIT_GO_ID_/` mmv Another option is to install mmv , a program that doesn't ship installed on any Unix type system I know of, but which is usually packaged for it in the standard package repository. With mmv , you can replace your whole shell script loop with $ mmv 'ID_*' 'SUBMIT_GO_ID_#1' Note that there is no -i flag because mmv doesn't support asking for every rename. It does, however, pre-check all planned renames and try to detect problems before beginning work, so it may be close enough for your purposes. rename Still another option is the rename command, of which there are at least three variants in the wild. There are two forks of Larry Wall's Perl script of that name, one of which may be installed on your system already as part of your Perl distribution. Then there is the util-linux variant, often either installed instead of that version on Linux machines, or beside it. If you have both installed, the Perl-based one is probably called prename to avoid a conflict. The main advantage of the Perl-based ones is that you can use Perl regular expressions , which are more powerful than the glob expressions supported by mmv or the POSIX regular expressions supported by sed . It doesn't matter much in this particular instance, however, because the task is trivial: $ rename '^ID_' SUBMIT_GO_ID_ ID_EDMSCP_*.txt.pgp The util-linux version is much less capable because although it calls its first parameter the "expression", it is neither a POSIX regular expression nor a Perl regular expression. It isn't even a glob expression: it's just a literal string. But as above, that limitation doesn't actually hamper us in this particular case: $ rename ID_ SUBMIT_GO_ID_ ID_EDMSCP_*.txt.pgp As with mmv , rename doesn't support the -i flag. 
Plain Old Bourne Shell All of that having been said, your particular case allows a much simpler solution because your input pattern appears at the tail end of the output pattern, so you can do it all in portable Bourne shell syntax: for file in ID_EDMSCP_*.txt.pgpdo mv -i "$file" SUBMIT_GO_"$file"done I've left the complicated alternatives above because they are often necessary, since the internal mechanisms within the shell language are not always sufficient.
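Whichever variant you use, it is worth previewing the renames first. A small sketch (the -n flag applies to the Perl-based rename; the echo trick works with any shell):

    rename -n 's/^ID_/SUBMIT_GO_ID_/' ID_EDMSCP_*.txt.pgp   # Perl rename: show what would happen, change nothing
    for file in ID_EDMSCP_*.txt.pgp; do
        echo mv -i "$file" SUBMIT_GO_"$file"                # drop the echo once the output looks right
    done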
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305283", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186179/" ] }
305,308
There's a binary that I need to run which uses bind with a port argument of zero, to get a random free port from the system. Is there a way I can constrain the range of ports the kernel is allowed to pick from?
On Linux, you'd do something like sudo sysctl -w net.ipv4.ip_local_port_range="60000 61000" Instructions for changing the ephemeral port range on other unices can be found, for example, at http://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html
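To verify the change and make it survive a reboot, something like the following should work (the sysctl.d file name is just a suggestion):

    sysctl net.ipv4.ip_local_port_range           # or: cat /proc/sys/net/ipv4/ip_local_port_range
    echo 'net.ipv4.ip_local_port_range = 60000 61000' | sudo tee /etc/sysctl.d/90-ephemeral-ports.conf
    sudo sysctl --system                          # reload all sysctl configuration files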
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305308", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50614/" ] }
305,345
Currently I have Arch and Windows with grub installed and configured. I'm going to make another Arch installation on a separate partition. Do I need to install and configure grub again on the newly installed distribution, or can I use the old one? I suppose that if I continue using the old (current from this point of view) grub I'd have to configure it again so that it sees the new Arch installation. What will happen if I format the current partition (with the old Arch installation)? Will grub continue working or not (i.e. would I need to boot some live CD to fix it)? To sum up: is grub installed in some general place independent of any OS, or is it tied to one (my current Arch installation)? Tutorials give this command: grub-mkconfig -o /boot/grub/grub.cfg which makes me think that grub is tied to the specific Linux installation; but they also show a grub-install command without specifying any directory. And if grub were tied to the current installation, how would my computer know which partition to check for grub? Otherwise, if it were "general", why would I have to "install" it as a package on the specific Arch installation?
Naming convention: GRUB (some of it) stays in the MBR. GRUB (the rest of it) is several files that are loaded from /boot/grub (for example: that nice image that appears as a background in GRUB is not stored in the MBR). Notes: This answer considers an MBR setup; GRUB can be used in other setups. In an EFI setup things get hairy: GRUB can be used, but so can the kernel itself, as its own EFI stub. GRUB (some of it) is installed in the MBR. The MBR is the first 512 bytes on a disk. The MBR is also used by the partition table of the disk, therefore GRUB itself has somewhat less space than the 512 bytes. The GRUB (some of it) inside the MBR loads a more complete GRUB (the rest of it) from another part of the disk, which is defined during GRUB installation to the MBR ( grub-install ). Since the MBR GRUB needs to find its counterpart on disk, which normally resides on /boot , the partition where the main GRUB resides cannot be too far from the partition table (often 512MB but may vary). It is very useful to have /boot as its own partition, since then GRUB for the entire disk can be managed from there. What does it mean: The GRUB in the MBR can only load one GRUB (the rest of it) from disk. That specific GRUB (the rest of it) on disk must be configured to find all OSes on the machine. The command grub-mkconfig -o /boot/grub/grub.cfg runs os-prober (if it can find it) which scans all partitions and produces a grub.cfg pointing to all the OSes. Therefore if you have several partitions with /boot (or the MS Windows equivalents, I do not know them but os-prober knows) os-prober will find them and create grub.cfg accordingly. Running grub-install installs a GRUB (some of it) in the MBR that points to the GRUB of the currently running OS with the current grub.cfg . What does this mean: You only need a single GRUB for the entire system. You can have different GRUBs on different disks (since they have distinct MBRs) but that only makes sense if you plan to remove the disk. You can manage the boot of all OSes from a single GRUB installation. On a single disk you shall always run grub-install from a single OS only! That's important, otherwise you will keep overwriting your config.
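As a concrete sketch of what that means in practice (assuming the disk is /dev/sda — adjust to your layout — and you want the existing Arch install to stay in charge of booting), after installing the second Arch you would run, from the first one:

    sudo grub-install /dev/sda                    # rewrite the MBR stage so it points at this install's /boot/grub
    sudo grub-mkconfig -o /boot/grub/grub.cfg     # os-prober adds entries for Windows and the new Arch install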
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/305345", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136690/" ] }
305,358
In the Advanced Bash-Scripting Guide , in example 27-4 , 7th line from the bottom, I read this: A function runs as a sub-process. I did a test in Bash, and it seems that the above statement is wrong. Searches on this site, the Bash man page, and my search engine shed no light. Do you have the answer, and would you like to explain?
The Advanced Bash-Scripting Guide is not always reliable and its example scripts contain out-dated practices such as using the effectively deprecated backticks for command substitution, i.e., `command` rather than $(command) . In this particular case, it’s blatantly incorrect. The section on Shell Functions in the (canonical) Bash manual definitively states that Shell functions are executed in the current shell context; no new process is created to interpret them.
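A quick demonstration you can paste into an interactive bash session; if functions really ran as sub-processes, the variable change would not survive the call:

    var=outside
    myfunc() { var=inside; }
    myfunc
    echo "$var"        # prints "inside" — the function body ran in the current shell
    ( var=subshell )   # an actual subshell, for contrast
    echo "$var"        # still prints "inside"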
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/305358", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
305,415
I have vim installed and it is configured with +python3/dyn . If I run vim --noplugin I can use py3 . However with my plugins enabled, I get the following error: E837: This Vim cannot execute :py3 after using :pythonE263: Sorry, this command is disabled, the Python library could not be loaded. I suspect that one of the plugins loads python2 and therefore defines which python version is being used ( similar to this vim-bootstrap issue ). The problem is I don't know which. How can I use python3 with my vim version? Plugins Installed The following plugins are installed with Vundle: " let Vundle manage Vundle, required Plugin 'gmarik/Vundle.vim' "Bundle 'Valloric/YouCompleteMe' Plugin 'tmhedberg/SimpylFold' Plugin 'vim-scripts/indentpython.vim' Plugin 'scrooloose/syntastic' Plugin 'scrooloose/nerdtree' Plugin 'jistr/vim-nerdtree-tabs' Plugin 'kien/ctrlp.vim' Plugin 'tpope/vim-fugitive' Plugin 'nvie/vim-flake8' "Plugin 'Lokaltog/powerline', {'rtp': 'powerline/bindings/vim/'} Plugin 'vim-airline/vim-airline' Plugin 'vim-airline/vim-airline-themes' Bundle 'klen/python-mode' Plugin 'jmcantrell/vim-virtualenv' Trying to make Python3 the default/preferred version I have tried to make python3 the preferred alternative by running: alternatives --install /usr/bin/python python /usr/bin/python3.5 2alternatives --install /usr/bin/python python /usr/bin/python2.7 1 vim (or one of the plugins) still uses python2.7 and I get the same errors as above. My vim version VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Jun 2 2016 10:02:17)Included patches: 1-1868Modified by <[email protected]>Compiled by <[email protected]>Huge version without GUI.
The issue is that simply executing has('python') in an if-statement causes python3 to be unavailable when vim was compiled with both python/dyn and python3/dyn. The simplest solution is probably just to add something like if exists('py2') && has('python')elseif has('python3')endif to your .vimrc before Vundle loads anything. Then, if you ever need to use python 2 instead you can just start vim with vim --cmd 'let py2 = 1' . Alternatively, I looked through your plugins and managed to find 3 which do has('python') before has('python3') : YouCompleteMe: I know you have this commented out, but it's how I stumbled across this question so it may lead someone else here. On line 36 of YouCompleteMe/plugin/youcompleteme.vim, make python3 get checked for first: elseif !has( 'python3' ) && !has( 'python' ) . python-mode: If you look in python-mode/plugin/pymode.vim, around line 275 there's the "has" if-statement; you'll notice you can actually set a global variable g:_uspy to force a certain version to be used. So, either put let g:_uspy = ':py3' in your .vimrc before the Vundle stuff or edit the if-statement in pymode.vim. vim-virtualenv: Same deal as YCM, reverse the conditions of the if-statement on line 10 of vim-virtualenv/plugin/virtualenv.vim. Of course, you really only need to fix the first one that's loaded if you do it this way.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305415", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115315/" ] }
305,443
I want to start learning assembly language, but all my googling hasn't made sense of it. I got an Exec format error and even tried wine, which is not good for understanding. So I wonder if anyone can tell me which command-line assembler will work on the x86-64 architecture, and perhaps show a hello world example for Linux?
There are quite a few assemblers available, including: gas (part of binutils , and supported by GCC) — this is available everywhere, and uses AT&T style; NASM (look for a nasm package in your distribution) — this supports Intel-style mnemonics; Yasm which is a rewrite of NASM (look for a yasm package); fasm , the flat assembler. Here's a “Hello world” for gas : .global _start .text_start: mov $1, %rax mov $1, %rdi mov $hello, %rsi mov $13, %rdx syscall mov $60, %rax xor %rdi, %rdi syscallhello: .ascii "Hello, world\n" Save this to hello.S , and build it using gcc -c hello.S && ld -o hello hello.o . The equivalent for NASM is: section .text global _start_start: mov rax, 1 mov rdi, 1 mov rsi, hello mov rdx, len syscall mov rax, 60 xor rdi, rdi syscall hello db "Hello, world",0x0A len equ $ - hello Save this as hello.asm , and build it using nasm -felf64 hello.asm && ld -o hello hello.o .
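Either version can be checked the same way once linked (a trivial sketch):

    ./hello        # prints: Hello, world
    echo $?        # prints 0, the status passed to the exit syscall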
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305443", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176344/" ] }
305,481
Some of my jobs are getting killed by the OS for some reason, and I need to investigate why this is happening. The jobs that I run don't show any error messages in their own logs, which probably indicates the OS killed them. Nobody else has access to the server. I'm aware of the OOM killer; are there any other process killers? Where would I find logs for these things?
oom is currently the only thing that kills automatically. dmesg and /var/log/messages should show oom kills. If the process can handle that signal, it could log at least the kill. Normally memory hogs get killed. Perhaps more swap space can help you, if the memory is only getting allocated but is not really needed. Else: Get more RAM.
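Concretely, you could look for the kernel's OOM messages like this (log locations vary by distribution; journalctl only applies on systemd systems):

    dmesg | grep -iE 'killed process|out of memory'
    grep -i 'killed process' /var/log/messages     # or /var/log/syslog on Debian/Ubuntu
    journalctl -k | grep -i 'killed process'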
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/305481", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186334/" ] }
305,487
My system is Novell SLES 11.4 x86-64. I was presented with: The ldd command must be disabled unless it protects against the execution of untrusted files. If you google the above you can easily obtain more information about it. The oem setting of /usr/bin/ldd is root.root with permissions 0755. It was also stated "An acceptable method of disabling 'ldd' is changing its mode to 0000" If I do this and disable ldd completely then my system does not work. The first ramification I immediately found was YAST doesn't work. So the first compromise was to do chmod 644 /usr/bin/ldd while leaving the file owned by root.root . This still causes problems and errors when running legitimate software (where a lot of money is spent on licensing). So I have since concluded this request is either antiquated or just bad. Looking for thoughts & suggestions, thanks.
oom is currently the only thing that kills automatically. dmesg and /var/log/messages should show oom kills. If the process can handle that signal, it could log at least the kill. Normally memory hogs get killed. Perhaps more swap space can help you, if the memory is only getting allocated but is not really needed. Else: Get more RAM.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/305487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154426/" ] }
305,524
I was wondering if it is possible to keep a file containing history per current working directory. So for example if I was working in /user/temp/1/ and typed a few commands, these commands would be saved in /user/temp/1/.his or something. I am using bash.
IMPORTANT: As pointed out by Gilles 'SO- stop being evil', the approach below can expose your shell to attacks and leak information unintentionally. Proceed with care. Building off the answer provided by Groggle , you can create an alternative cd (ch) in your ~/.bash_profile like this: function ch () { cd "$@" export HISTFILE="$(pwd)/.bash_history"} automatically exporting the new HISTFILE value each time ch is called. The default behavior in bash only updates your history when you end a terminal session, so this alone will simply save the history to a .bash_history file in whichever folder you happen to end your session from. A solution is mentioned in this post and detailed on this page , allowing you to update your HISTFILE in realtime. A complete solution consists of adding two more lines to your ~/.bash_profile, shopt -s histappendPROMPT_COMMAND="history -a;$PROMPT_COMMAND" changing the history mode to append with the first line and then configuring the history command to run at each prompt.
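Putting the pieces together, a minimal sketch of the whole ~/.bash_profile addition (same caveats as above apply):

    shopt -s histappend
    PROMPT_COMMAND="history -a;$PROMPT_COMMAND"
    ch () {
        cd "$@" && export HISTFILE="$PWD/.bash_history"
    }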
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305524", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28951/" ] }
305,549
Let’s say you started a new application in Linux (like a text editor, etc.), but you forgot to use the “&”. What command(s) would you use in order to make that process run in the background, while NOT having to CLOSE that application? In this way, you can have both processes open and working separately (e.g., the command-line terminal you used to create the process, and the process itself, such as a text editor, still running)?
In the terminal window you would typically type Control + Z to "suspend" the process and then use the bg command to "background" it. eg with a sleep command $ /bin/sleep 1000^Z[1] + Stopped /bin/sleep 1000$ bg[1] /bin/sleep 1000&$ jobs[1] + Running /bin/sleep 1000$ We can see the process is running and I still have my command line.
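If, in addition, you want the job to survive closing that terminal (since it was started without nohup), you can tell bash to stop sending it SIGHUP; a small sketch:

    bg             # resume the stopped job in the background
    disown -h %1   # keep it in the job table but don't HUP it when the shell exits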
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/305549", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186383/" ] }
305,563
with cat , I use the -A flag and I can't find what these characters mean anywhere. For example: cat /proc/cpuinfo > output cat -A output One of the lines is this: processor^I: 7$ I know the $ means new line, but what does ^I mean? What does ^@ mean? I'm trying to figure out what type of white space cpuinfo spits out so I can strip them in my C program, but I'm having a difficult time doing that.
^I and ^@ use the common “caret” notation for control characters. ^I means the ASCII character control-I, i.e. character 9, which is a tab. ^@ means the ASCII character control-@, i.e. character 0, which in C is the string end character. The general form is ^ c where c is an uppercase letter or one of @[\]^_ , representing the byte whose value is that of c minus 64; and ^? representing the byte value 127 (which is the byte value of ? plus 64). There's another, far less standard notation used by cat -A : non-ASCII bytes (i.e. byte values 128 and above) are shown as M- followed by the representation of the byte whose value is 128 less (i.e. the byte value with the upper bit flipped). cat -A isn't the best way to understand visually ambiguous output. A hexadecimal transcript gives you more precise information, e.g. od -t x1 /proc/cpuinfohd /proc/cpuinfo But from a C program you can just use scanf to parse the information. All ASCII whitespace is whitespace to scanf , and with files in /proc you know that the format will be valid.
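For the specific goal of seeing which whitespace bytes /proc/cpuinfo uses, od -c is often the most readable form, since it prints tabs as \t and newlines as \n:

    od -c /proc/cpuinfo | head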
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/305563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66600/" ] }
305,588
It' hard to isolate the CPU, I know, but the errors I'm seeing suggest that's the issue. This is definitely not a malfunctioning/broken hardware problem . I've been running Windows 10 all day for the past several days and this thing is flippin' fast! No crashing. More importantly, I ran Windows memory checker. Memory is all good. machine specs The machine is a brand new Lenovo Yoga 710 15" x64Intel i7-6500 CPU @ 2.50 GHz, 2601 Mhz, 2 Cores, 4 Logical ProcessorsSMBIOS Version 2.8BIOS Mode UEFI16.0 GB DDR4 Ram256 MB SSD isolating to linux kernel (?) I've seen the same problems on both archlinux-2016.08.01-dual.iso ubuntu-gnome-16.04.1-desktop-amd64.iso For Arch -- the problem was only appearing intermittently at boot from the USB stick. I managed to get Arch installed on a 100GB ext4 partition on the drive. That install has the same issue intermittently (like 90% of the time) during boot. If I get passed the boot, then the issue appears at random after the first couple of terminal commands I execute, eventually causing a complete deadlock. For Ubuntu -- the USB stick doesn't even boot. I get stopped by these same errors immediately. Deadlock... So many errors... The journal is stuffed with memory-related errors whenever this happens, but the key errors I'm seeing are: General protection fault 0000[#1] PREEMPT SMP RIP kmem_cache_alloc RIP kmem_cache_alloc_trace I've seen some of the same stack traces several times for these errors: rbt_memtype_copy_nth_elementon_each_cpuflusH_tbl_kernel_range__purge_umap_area_lazyum_unmam_aliaseschange_page_attr_set_clrset_memory_rofrob_text.isramodule_enable_ro kobject_createkobject_create_and_addload_module__symbol_putkernel_readsys_finit_moduleentry_SYSCALL_64_fastpath kmem_cache_alloc_traceallocate_cgrp_cset_links...sys_writeentry_SYSCALL-64_fastpath Linux also keeps promising that it's fixing the problem Fixing recursive fault but reboot is needed! I wish.. intel ucode I also tried installing the intel-ucode package in my Arch install. I saw in the dmesg logs that the microcodes were updated, but that unfortunately did not solve my problem. What could be the issue? How can fix it? EDIT Additional note. The general protection fault messages and "lock up detected"-type messages typically reference a CPU. I've seen CPU0 , CPU1 , CPU2 and CPU3 in these messages. It seems like something is causing the CPU's to not get along, like they're all in a deadlock trying to clear out cache memory or something. EDIT2 BIOS mentioned in error I see this bit of information in some errors: LENOVO 80U01LENOVO YOGA710-1 BIOS OGCN20WW(v1.04) 6/30/2016 Not sure if that is helpful to a pro in understanding the issue... EDIT3 maxcpus=1 I was looking for debugging options in the kernel params documentation and found maxcpus If I set max cpu's to 1, then the problem goes away. So it would seem that the problem is some kind of shared cache memory violation. EDIT3 maxcpus=1 + Gnome = broken again Although maxcpus=1 seemed to make the system work with just the 1 CPU, I installed gnome and then ran systemctl enable gdm.service Now, when I reboot, I get all of my errors back again, but this time they're all happening on CPU0 So it seems that something is still causing a memory violation even with the 1 CPU. EDIT4 nolapic So using nolapic seems to get everything "working" BUT by using nolapic , I effectively disable my other CPU and all multithreading in the 1 working CPU. 
I'm trying to use this for OpenMP, and after booting with nolapic , OpenMP and the linux kernel can only find 1 thread, 1 CPU. That sucks! I also tried intel_idle.max_cstate=0 and 1 , 2 , etc. But this does not fix the boot problem. What else could cause the kernel to fail to utilize my multi-core machine?
Turns out the issue was i2c_hid . This seems to be some kind of touchpad driver. For some reason, when I disable it, I can still use my touchpad. It could be that the touch screen on the laptop was using this driver, too, because that doesn't work. I don't like to mung up my laptop screen with fingerprints, anyway... So bye bye i2c_hid ! I fixed it by adding this to the kernel params: modprobe.blacklist=i2c_hid Although nolapic also worked, it disabled all but 1 core of the processor. I'd highly recommend to anyone else out there not to use acpi=off or nolapic for this reason. Using these options is a nuclear weapon that might make your machine work, but you will lose a lot of performance and/or I/O devices as collateral damage. It's a good starting point to get booted, and then you can pore through journalctl like I did to analyse the boots that fail. Good luck to those who find this.
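If you prefer not to carry the kernel parameter forever, the same module can be blacklisted persistently; on Arch that would look roughly like this (the file name is arbitrary, and the initramfs rebuild only matters if the module gets pulled in early):

    echo 'blacklist i2c_hid' | sudo tee /etc/modprobe.d/blacklist-i2c_hid.conf
    sudo mkinitcpio -P     # regenerate the initramfs images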
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172271/" ] }
305,606
When I fire command vmstat -s on my Linux box, I got stats as: $ vmstat -s16305800 total memory16217112 used memory9117400 active memory6689116 inactive memory88688 free memory151280 buffer memory I have skipped some details that is shown with this command. I understand these terms: Active memory is memory that is being used by a particular process. Inactive memory is memory that was allocated to a process that is no longer running. Just want to know, is there any way I can get the processes, with which inactive memory is allocated? Because top or vmstat command still shows the used memory as sum of active and inactive memory and I can see only processes that are using active memory but what processes are using inactive memory is still a question for me.
There are cases where looking at inactive memory is interesting: a high ratio of active to inactive memory can indicate memory pressure, for example, but that condition is usually accompanied by paging/swapping which is easier to understand and observe. Another case is being able to observe a ramping up or saw-tooth for active memory over time – this can give you some forewarning of inefficient software (I've seen this with naïve software implementations exhibiting O(n) type behavior and performance degradation). The file /proc/kpageflags contains a 64-bit bitmap for every physical memory page; you can get a summary with the program page-types which may come with your kernel. Your understanding of active and inactive is incorrect, however: active memory is pages which have been accessed "recently"; inactive memory is pages which have not been accessed "recently". "recently" is not an absolute measure of time, but depends also on activity and memory pressure (you can read some of the technical details in the free book Understanding the Linux Virtual Memory Manager , Chapter 10 is relevant here), or the kernel documentation ( pagemap.txt ). Each list is stored as an LRU (more or less). Inactive memory pages are good candidates for writing to the swapfile, either pre-emptively (before free memory pages are required) or when free memory drops below a configured limit and free pages are (expected to be imminently) needed. Either flag applies to pages allocated to running processes; with the exception of persistent or shared memory, all memory is freed when a process exits, and it would be considered a bug otherwise. This low-level page flagging doesn't need to know the PID (and a memory page can have more than one PID with it mapped in any case), so the information required to provide the data you request isn't in one place. To do this on a per-process basis you need to extract the virtual address ranges from /proc/PID/maps , convert to PFN (physical page) with /proc/PID/pagemap , and index into /proc/kpageflags . It's all described in pagemap.txt , and takes about 60-80 lines of C. Unless you are troubleshooting the VM system, the numbers aren't very interesting. One thing you could do is count the inactive and swap-backed pages per process; these numbers should indicate processes which have a low RSS (resident) size compared with VSZ (total VM size). Another thing might be to infer a memory leak, but there are better tools for that task.
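Short of writing that C, you can get a rough per-process feel for resident versus swapped memory from /proc/PID/smaps (1234 is a placeholder PID; note this shows what has actually been pushed to swap, not the active/inactive split):

    awk '/^Rss:/ {rss += $2} /^Swap:/ {swap += $2}
         END {printf "RSS: %d kB  swapped: %d kB\n", rss, swap}' /proc/1234/smaps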
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/305606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186413/" ] }
305,643
When I query the status of the NTP daemon with ntpdc -c sysinfo I get the following output: system peer: 0.0.0.0system peer mode: unspecleap indicator: 11stratum: 16precision: -20root distance: 0.00000 sroot dispersion: 12.77106 sreference ID: [73.78.73.84]reference time: 00000000.00000000 Thu, Feb 7 2036 7:28:16.000system flags: auth monitor ntp kernel statsjitter: 0.000000 sstability: 0.000 ppmbroadcastdelay: 0.000000 sauthdelay: 0.000000 s This indicates that the NTP sync failed. However the system time is accurate within 1 second precision. When I ran my system without network connection for the same period as I did now the system time would deviate ~10s. This behavior suggests that the system has another way of syncing the time. I realized that there is also systemd-timesyncd.service (with configuration file at /etc/systemd/timesyncd.conf ) and timedatectl status gives me the correct time: Local time: Thu 2016-08-25 10:55:23 CEST Universal time: Thu 2016-08-25 08:55:23 UTC RTC time: Thu 2016-08-25 08:55:22 Time zone: Europe/Berlin (CEST, +0200) NTP enabled: yesNTP synchronized: yes RTC in local TZ: no DST active: yes Last DST change: DST began at Sun 2016-03-27 01:59:59 CET Sun 2016-03-27 03:00:00 CEST Next DST change: DST ends (the clock jumps one hour backwards) at Sun 2016-10-30 02:59:59 CEST Sun 2016-10-30 02:00:00 CET So my question is what is the difference between the two mechanisms? Is one of them deprecated? Can they be used in parallel? Which one should I trust when I want to query the NTP sync status? (Note that I have a different system (in a different network) for which both methods indicate success and yield the correct time.)
systemd-timesyncd is basically a small client-only NTP implementation more or less bundled with newer systemd releases. It's more lightweight than a full ntpd but only supports time sync - i.e. it can't act as an NTP server for other machines. It's intended to replace ntpd for clients. You should not use both in parallel, as in theory they could pick different timeservers that have a slight delay between them, leading to your system clock being periodically "jumpy". To get the status, you unfortunately need to use ntpdc if you use ntpd and timedatectl if you use timesyncd, I know of no utility that can read both.
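In practice the status check therefore depends on which daemon is active; a quick sketch (the ntpd unit may be called ntp or ntpd depending on the distribution):

    systemctl is-active systemd-timesyncd ntpd   # see which one is actually running
    timedatectl status                           # for systemd-timesyncd
    ntpq -p                                      # for ntpd: list peers and sync state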
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/305643", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137240/" ] }
305,661
My question is similar to How do I prevent sed -i from destroying symlinks? , but concerning hardlinks. Using sed -i to work on a file destroys all the hardlinks the file has, since sed works by writing to a temporary file and then moving this.The --follow-symlinks parameter doesn't help in case of a hard link. Is there an alternative to using the rather ugly: sed 's/cat/dog/' pet_link > pet_link
For sed 's/cat/dog/' or any other substitution that doesn't change the size of the file, with any Bourne-like shell, you can do: sed 's/cat/dog/' < file 1<> file The little-known but over 35 year old¹ standard <> operator opens a file in read+write mode without truncation. Basically, here that makes sed write its output over its input. It's important to make sure that the output doesn't overwrite sections of the file that sed has not read yet. For substitutions that cause the file size to decrease, with ksh93 : sed 's/hippopotamus/ant/' < file 1<>; file <>; , a ksh93 extension, is the same as <> except that if the command being redirected succeeds, the file gets truncated where the command finished. Or with perl: perl -pe 's/hippopotamus/ant/; END{truncate STDOUT, tell STDOUT}' < file 1<> file For anything else, just use the standard form: cp -i file file.back && sed 's/dog/horse/g' < file.back > file # && rm -f file.back ¹ Though the initial implementation in the Bourne shell and early versions of the Korn shell was actually broken, fixed in the late 80s. And the Almquist shell initially didn't support it.
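A quick way to convince yourself that the in-place rewrite keeps the hard links intact (pet_link as in the question):

    ls -li pet_link                           # note the inode number and link count
    sed 's/cat/dog/' < pet_link 1<> pet_link
    ls -li pet_link                           # same inode, so every hard link sees the change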
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/305661", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186430/" ] }