source_id | question | response | metadata
---|---|---|---|
215,438 | I'd like to install PHP 5.6 on my Debian Wheezy system, so I added the dotdeb repo to apt.
While fetching the key, an error occurred: # wget http://www.dotdeb.org/dotdeb.gpg -O- |apt-key add –
# gpg: can't open `–': No such file or directory What do I have to change to add the key to apt? | Your only problem is that the dash after apt-key add is not the ASCII 0x2D hyphen character, but the Unicode U+2013 en dash. The former instructs apt-key to read the key from the standard input (where the preceding wget provides it through the pipe), while the latter is not treated specially and is thus interpreted as a file name to read the key from. Unsurprisingly, such a file does not seem to exist in your current directory. | {
"source": [
"https://unix.stackexchange.com/questions/215438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72143/"
]
} |
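Putting the answer above into practice, here is the same command with a plain ASCII hyphen (a sketch that assumes the dotdeb URL from the question is still reachable):

```sh
# "-" (ASCII 0x2D) makes apt-key read the key from standard input
wget http://www.dotdeb.org/dotdeb.gpg -O- | apt-key add -
```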
215,450 | Data
Name Age
Madhavan 29
saravana<Tab Press> ! How can I edit this data in vim in a tabular fashion? When I press the Tab key from column 1 in row 3, the cursor must move to the exclamation mark position. Note: The exclamation mark is just a marker position where I want the cursor to go; it is not a real character in place. This can be done using org-table mode in emacs but it is overkill at times. So I am looking for simpler ways in vim/shell? | Your only problem is that the dash after apt-key add is not the ASCII 0x2D hyphen character, but the Unicode U+2013 en dash. The former instructs apt-key to read the key from the standard input (where the preceding wget provides it through the pipe), while the latter is not treated specially and is thus interpreted as a file name to read the key from. Unsurprisingly, such a file does not seem to exist in your current directory. | {
"source": [
"https://unix.stackexchange.com/questions/215450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
215,558 | While attempting to SSH into a host I received the following message from xauth : /usr/bin/xauth: timeout in locking authority file /home/sam/.Xauthority NOTE: I was trying to remote display an X11 GUI via an SSH connection so I needed xauth to be able to create a $HOME/.Xauthority file successfully, but as that message was indicating, it was clearly not. Attempts to run any X11 based apps, such as xeyes were greeted with this message: $ xeyes
X11 connection rejected because of wrong authentication.
Error: Can't open display: localhost:10.0 How can I resolve this issue? | Running an strace on the remote system where xauth is failing will show you what's tripping up xauth . For example $ strace xauth list
stat("/home/sam/.Xauthority-c", {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
open("/home/sam/.Xauthority-c", O_WRONLY|O_CREAT|O_EXCL, 0600) = -1 EEXIST (File exists)
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
nanosleep({2, 0}, 0x7fff6c4430e0) = 0
open("/home/sam/.Xauthority-c", O_WRONLY|O_CREAT|O_EXCL, 0600) = -1 EEXIST (File exists)
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
nanosleep({2, 0}, 0x7fff6c4430e0) = 0
open("/home/sam/.Xauthority-c", O_WRONLY|O_CREAT|O_EXCL, 0600) = -1 EEXIST (File exists)
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0 So xauth is attempting to open a file and it already exists. The culprit file is /home/sam/.Xauthority-c . We can confirm the presence of this file on the remote system: $ ls -l .Xauthority*
-rw------- 1 sam sam 55 Jul 12 22:04 .Xauthority
-rw------- 1 sam sam 0 Jul 12 22:36 .Xauthority-c
-rw------- 1 sam sam 0 Jul 12 22:36 .Xauthority-l The fix As it turns out, those files are lock files for .Xauthority , so simply removing them resolves the issue. $ rm -fr .Xauthority-* With the files deleted, exit from the SSH connection and then reconnect. This will allow xauth to re-run successfully. $ ssh -t skinner ssh sam@blackbird
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-44-generic x86_64)
* Documentation: https://help.ubuntu.com/
Last login: Sun Jul 12 22:37:54 2015 from skinner.bubba.net
$ Now we're able to run xauth list and X11 applications without issue. $ xauth list
blackbird/unix:10 MIT-MAGIC-COOKIE-1 cf01f793d2a5ece0ea58196ab5a7977a The GUI $ xeyes Alternative method to resolve the issue I came across this post titled: xauth: error in locking authority file .Xauthority [linux, ssh, X11] which mentions the use of xauth -b to break any lock files that may be hanging around. xauth 's man page seems to back this up: -b This option indicates that xauth should attempt to break any
authority file locks before proceeding. Use this option only to
clean up stale locks. References Dealing with xauth “error in locking authority file” errors | {
"source": [
"https://unix.stackexchange.com/questions/215558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
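A condensed sketch of the two fixes described in the answer above (paths assumed from the question):

```sh
# Remove the stale lock files by hand...
rm -f ~/.Xauthority-c ~/.Xauthority-l
# ...or, per the quoted man page, let xauth break stale locks itself
xauth -b quit
```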
215,598 | EDIT : The problem was that Apple uses permissions to mark backups and prevents you from modifying them (probably a security feature). By using chmod -RN <dir> I removed ACL data from all the folders with important data and that allowed me to make myself the owner and apply the appropriate permissions. Original question I have an extremely large backup (>700GB) that now has the wrong permissions (my UID changed during clean install, long story) and I need to change them. The time-consuming option is to manually go through each folder and change the permissions but that will take ages. I want to use chown to make myself the owner of all my important data and then use chmod 700 on all those folders to give rwx permissions to only me. The ideal solution is some method of using find to recursively look for folders matching a regex (my current one is .*/[DCV].*|Pictures|M[ou].* ) and then make my UID the owner and set the permissions to 700. The important bit that I can't grasp: However, when I try to run chown Me DirectoryName I get chown: DirectoryName: Operation not permitted . Everything I find is related to changing the permissions of a file and not a directory. Maybe I'm looking at this the wrong way? Something tells me there isn't a way of giving my UID rwx and --- to everyone else. How can I achieve this? I'm running Mac OS X 10.10.3. I know that this is a UNIX/Linux forum (and I'm running Mac) but this question is a lot more about using the shell, chown , chmod , and permissions and any solutions posted here will be applicable to any UNIX-based OS. It would be preferable if the posted solutions will make my older backups reappear in Time Machine. Thanks to all who have promptly replied, but chown just doesn't seem to work on directories for some reason. Is the fact that this is a .sparsebundle disk image on a network drive relevant? I assumed it would be the same as on any external drive. | I may have misunderstood, but you can recursively use chmod and chown , e.g. chown -R username:username /path/directory To recursively apply permission 700 you can use: chmod -R 700 /path/directory Of course the above is for Linux, so not sure if Mac OS X is the same. EDIT: Yea sorry, forgot to mention you need to be root to chown something, I just assumed you knew this...my bad. | {
"source": [
"https://unix.stackexchange.com/questions/215598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50840/"
]
} |
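Combining the answer with the ACL note from the question's EDIT, a hedged sketch for macOS (the backup path is a placeholder):

```sh
sudo chmod -RN /path/to/backup             # macOS-specific: -N strips ACL entries
sudo chown -R "$(id -un)" /path/to/backup  # take ownership recursively
chmod -R 700 /path/to/backup               # rwx for the owner only
```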
215,604 | xxxxxx15 |xxxxxx02|RM99999 |xxxxx |Ankur |xxxxx |xxxxxxxx|M|xxxxxxxx| | | |xxxxxxx|xxx|xxxxxxxx| |10 |New York| 23.00|F|P| | |NA Want to replace 10 with 65, closest I got is sed -i '/^.\{20\}RM99999/ s/^\(?:[^|]*\|\)\{16\}\([^|]*\)/\165/' test.txt But it replaces the first character with 65 ( RM99999 can be in more locations, but I need to replace the line which has RM99999 at the 20th character) | I may have misunderstood, but you can recursively use chmod and chown , e.g. chown -R username:username /path/directory To recursively apply permission 700 you can use: chmod -R 700 /path/directory Of course the above is for Linux, so not sure if Mac OS X is the same. EDIT: Yea sorry, forgot to mention you need to be root to chown something, I just assumed you knew this...my bad. | {
"source": [
"https://unix.stackexchange.com/questions/215604",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123156/"
]
} |
215,958 | I want to delete the 5th word of each line in a file. The current content of the file: File is not updated or and will be removed
System will shut down f within 10 seconds
Please save your work 55 or copy to other location
Kindly cooperate with us D Expected output: File is not updated and will be removed
System will shut down within 10 seconds
Please save your work or copy to other location
Kindly cooperate with us | How about cut : $ cut -d' ' -f1-4,6- file.txt
File is not updated and will be removed
System will shut down within 10 seconds
Please save your work or copy to other location
Kindly cooperate with us -d' ' sets the delimiter as space -f1-4,6- selects the 1st to 4th fields (words), skips the 5th one, and then continues printing from the 6th field to the end of the line. | {
"source": [
"https://unix.stackexchange.com/questions/215958",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
216,042 | I'm looking for a way to show all of the URLs in a redirect chain, preferably from the shell. I've found a way to almost do it with curl, but it only shows the first and last URL. I'd like to see all of them. There must be a way to do this simply, but I can't for the life of me find what it is. Edit: Since submitting this I've found out how to do it with Chrome (CTRL+SHIFT+I->Network tab). But, I'd still like to know how it can be done from the Linux command line. | How about simply using wget ? $ wget http://picasaweb.google.com 2>&1 | grep Location:
Location: /home [following]
Location: https://www.google.com/accounts/ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%253A%252F%252Fpicasaweb.google.com%252Fhome&service=lh2&ltmpl=gp&passive=true [following]
Location: https://accounts.google.com/ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%3A%2F%2Fpicasaweb.google.com%2Fhome&service=lh2&ltmpl=gp&passive=true [following] curl -v also shows some info, but it doesn't look as useful as wget . $ curl -v -L http://picasaweb.google.com 2>&1 | egrep "^> (Host:|GET)"
> GET / HTTP/1.1
> Host: picasaweb.google.com
> GET /home HTTP/1.1
> Host: picasaweb.google.com
> GET /accounts/ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%253A%252F%252Fpicasaweb.google.com%252Fhome&service=lh2&ltmpl=gp&passive=true HTTP/1.1
> Host: www.google.com
> GET /ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%253A%252F%252Fpicasaweb.google.com%252Fhome&service=lh2&ltmpl=gp&passive=true HTTP/1.1
> Host: accounts.google.com | {
"source": [
"https://unix.stackexchange.com/questions/216042",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70203/"
]
} |
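An alternative sketch with curl that prints every hop of the chain, using HEAD requests instead of downloading the pages:

```sh
# -s silent, -I HEAD requests only, -L follow redirects
curl -sIL http://picasaweb.google.com | grep -i '^location:'
```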
216,101 | I would like to print specific information about network configuration for different interfaces over all the servers I manage: the interface name the interface ipv4 address the interface hardware mac address … Unfortunately, a simple ip -o addr show doesn't allow its output to be parsed easily with awk because of the line-breaks. Is it possible to have ip addr show printed on exactly one line per interface? Otherwise, is it possible to achieve the same result using awk and/or sed ? This goes beyond my knowledge of those two commands since the lines have to be concatenated three by three… | Just use the --brief flag. ip --brief address show | {
"source": [
"https://unix.stackexchange.com/questions/216101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40025/"
]
} |
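A short sketch of the suggestion, plus an awk fallback for iproute2 versions that predate --brief:

```sh
ip --brief address show           # one line per interface: name, state, addresses
ip --brief link show              # one line per interface: name, state, MAC
# Older iproute2: -o prints one record per address line, so awk can pick fields
ip -o -4 addr show | awk '{print $2, $4}'
```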
216,280 | The Midnight Commander is a very helpful tool when we're using only the text mode. But sometimes it bothers me that I have to see all the hidden files inside a folder (files that begin with "."). I've tried to find how to do it changing some configurations by myself and then looking on the man page. But I didn't succeed. Does anyone know how can I do it? | Choose Options from the menu bar, then Panel options.
You have it right there, 5th option on the left column: "Show hidden files". | {
"source": [
"https://unix.stackexchange.com/questions/216280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103357/"
]
} |
216,618 | I've heard confusion come up several times recently around what a Docker container is, and more specifically what's going on inside, with respect to commands & processes that I invoke while inside a Docker container. Can someone please provide a high level overview of what's going on? | Docker gets thrown into the virtualization bucket, because people assume that it's somehow virtualizing the hardware underneath. This is a misconception that stems from the terminology that Docker makes use of, mainly the term container. However Docker is not doing anything magical with respect to virtualizing a system's hardware. Rather it's making use of the Linux Kernel's ability to construct "fences" around key facilities, which allows for a process to interact with resources such as network, the filesystem, and permissions (among other things) to give the illusion that you're interacting with a fully functional system. Here's an example that illustrates what's going on when we start up a Docker container and then enter it through the invocation of /bin/bash . $ docker run -it ubuntu:latest /bin/bash
root@c0c5c54062df:/# Now from inside this container, if we run ps -eaf : Switching to another terminal tab where we're logged into the host system that's hosting the Docker container, we can see the process space that the container is "actually" taking up: Now if we go back to the Docker tab and launch several processes within it and background them all, we can see that we now have several child processes running under the primary Bash process which we originally started as part of the Docker container launch. NOTE: The processes are 4 sleep 1000 commands which are being backgrounded. Notice how inside the Docker container the processes are assigned process IDs (PIDs) of 48-51. See them in the ps -eaf output in there as well: However, with this next image, much of the "magic" that Docker is performing is revealed. See how the 4 sleep 1000 processes are actually just child processes to our original Bash process? Also take note that our original Docker container /bin/bash is in fact a child process to the Docker daemon too. Now if we were to wait 1000+ seconds for the original sleep 1000 commands to finish, and then run 4 more new ones, and start another Docker container up like this: $ docker run -it ubuntu:latest /bin/bash
root@450a3ce77d32:/# The host computer's output from ps -eaf would look like this: And other Docker containers will all just show up as processes under the Docker daemon. So you see, Docker is really not virtualizing (in the traditional sense), it's constructing "fences" around the various Kernel resources and limiting the visibility to them for a given process + children. | {
"source": [
"https://unix.stackexchange.com/questions/216618",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
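A minimal way to see the PID namespace at work, assuming a stock ubuntu image with procps installed: the command we start is PID 1 inside its own fence.

```sh
docker run --rm ubuntu:latest ps -ef   # shows ps itself running as PID 1
```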
216,644 | In the past, I learned that in Linux/UNIX file systems, directories are just files, which contain the filenames and inode numbers of the files inside the directory. Is there a simple way to see the content of a directory? I mean the way the files names and inodes are stored/organized. I'm not looking for ls , find or something similiar. I also don't want to see the content of the files inside a directory. I want to see the implementation of the directories. If every directory is just a text file with some content, maybe a simple way exists to see the content of this text file. In the bash in Linux it is not possible to do a cat folder . The output is just Is a directory . Update The question How does one inspect the directory structure information of a unix/linux file? addresses the same issue but it has no helpful solution like the one from mjturner . | The tool to display inode detail for a filesystem will be filesystem specific. For the ext2 , ext3 , ext4 filesystems (the most common Linux filesystems), you can use debugfs , for XFS xfs_db , for ZFS zdb . For btrfs some information is available using the btrfs command. For example, to explore a directory on an ext4 filesystem (in this case / is dev/sda1 ): # ls src
Animation.js Map.js MarkerCluster.js ScriptsUtil.js
Directions.js MapTypeId.js markerclusterer.js TravelMode.js
library.js MapUtils.js Polygon.js UnitSystem.js
loadScripts.js Marker.js Polyline.js Waypoint.js
# ls -lid src
664488 drwxrwxrwx 2 vagrant vagrant 4096 Jul 15 13:24 src
# debugfs /dev/sda1
debugfs: imap <664488>
Inode 664488 is part of block group 81
located at block 2622042, offset 0x0700
debugfs: dump src src.out
debugfs: quit
# od -c src.out
0000000 250 # \n \0 \f \0 001 002 . \0 \0 \0 204 030 \n \0
0000020 \f \0 002 002 . . \0 \0 251 # \n \0 024 \0 \f 001
0000040 A n i m a t i o n . j s 252 # \n \0
0000060 030 \0 \r 001 D i r e c t i o n s . j
0000100 s \0 \0 \0 253 # \n \0 024 \0 \n 001 l i b r
0000120 a r y . j s \0 \0 254 # \n \0 030 \0 016 001
0000140 l o a d S c r i p t s . j s \0 \0
0000160 255 # \n \0 020 \0 006 001 M a p . j s \0 \0
0000200 256 # \n \0 024 \0 \f 001 M a p T y p e I
0000220 d . j s 257 # \n \0 024 \0 \v 001 M a p U
0000240 t i l s . j s \0 260 # \n \0 024 \0 \t 001
0000260 M a r k e r . j s \0 \0 \0 261 # \n \0
0000300 030 \0 020 001 M a r k e r C l u s t e
0000320 r . j s 262 # \n \0 034 \0 022 001 m a r k
0000340 e r c l u s t e r e r . j s \0 \0
0000360 263 # \n \0 024 \0 \n 001 P o l y g o n .
0000400 j s \0 \0 264 # \n \0 024 \0 \v 001 P o l y
0000420 l i n e . j s \0 265 # \n \0 030 \0 016 001
0000440 S c r i p t s U t i l . j s \0 \0
0000460 266 # \n \0 030 \0 \r 001 T r a v e l M o
0000500 d e . j s \0 \0 \0 267 # \n \0 030 \0 \r 001
0000520 U n i t S y s t e m . j s \0 \0 \0
0000540 270 # \n \0 240 016 \v 001 W a y p o i n t
0000560 . j s \0 305 031 \n \0 214 016 022 001 . U n i
0000600 t S y s t e m . j s . s w p \0 \0
0000620 312 031 \n \0 p 016 022 001 . U n i t S y s
0000640 t e m . j s . s w x \0 \0 \0 \0 \0 \0
0000660 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 In the above, we start by finding the inode of directory src ( 664488 ) and then dump its contents into file src.out and then display that using od . As you can see, the contents of all of the files in that directory ( Animation.js , etc.) are visible in the dump. This is just a start - see the debugfs manual page or type help within debugfs for more information. If you're using ext4 , you can find more information about the structure and layout of directory entries in the kernel documentation . | {
"source": [
"https://unix.stackexchange.com/questions/216644",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120962/"
]
} |
216,891 | How do I make sudo remember my password for longer so that I don't have to keep typing it? I do not want to sudo su and execute commands as root all the time. I am on Arch Linux and have tried to google this but I get examples to change my password, which is not what I'm after. | There is a timestamp_timeout option in /etc/sudoers . You can set this option to a number of minutes; after that time it will ask for the password again. More info in man sudoers . And make sure you edit your sudoers file using visudo, which checks your syntax and will not leave you with a broken configuration and an inaccessible sudo . | {
"source": [
"https://unix.stackexchange.com/questions/216891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110915/"
]
} |
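A sketch of what such an entry could look like (always edit through visudo, never directly):

```
Defaults timestamp_timeout=30   # remember the password for 30 minutes
```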
216,953 | I'm new to bash and would like my prompt to show something that in tcsh was trivial, yet after a good google search I still cannot do. I would like my prompt to include only the current and parent directories like this: /parent/currentdir $ In tcsh this is achieved by: set prompt = "%C2 %" However in bash so far I have only found that I have to parse pwd to obtain the same output. Isn't there a simpler way, like doing: export PS1="$(some_command) $" | Bash's prompt control features are rather static. If you want more control, you can include variables in your prompt; make sure you haven't turned off the promptvars option . PS1='${PWD#"${PWD%/*/*}/"} \$ ' Note the single quotes: the variable expansions must happen at the time the prompt is displayed, not at the time the PS1 variable is defined. If you want more control over what is displayed, you can use command substitutions. For example, the snippet above loses the ~ abbreviation for the home directory. PS1='$(case $PWD in
$HOME) HPWD="~";;
$HOME/*/*) HPWD="${PWD#"${PWD%/*/*}/"}";;
$HOME/*) HPWD="~/${PWD##*/}";;
/*/*/*) HPWD="${PWD#"${PWD%/*/*}/"}";;
*) HPWD="$PWD";;
esac; printf %s "$HPWD") \$ ' This code is rather cumbersome, so instead of sticking it into the PS1 variable, you can use the PROMPT_COMMAND variable to run code to set HPWD and then use that in your prompt. PROMPT_COMMAND='case $PWD in
$HOME) HPWD="~";;
$HOME/*/*) HPWD="${PWD#"${PWD%/*/*}/"}";;
$HOME/*) HPWD="~/${PWD##*/}";;
/*/*/*) HPWD="${PWD#"${PWD%/*/*}/"}";;
*) HPWD="$PWD";;
esac'
PS1='$HPWD \$' Since the shortened prompt only changed on a directory change, you don't need to recalculate it each time a prompt is displayed. Bash doesn't provide a hook that runs on a current directory change, but you can simulate it by overriding cd and its cousins. cd () { builtin cd "$@" && chpwd; }
pushd () { builtin pushd "$@" && chpwd; }
popd () { builtin popd "$@" && chpwd; }
chpwd () {
case $PWD in
$HOME) HPWD="~";;
$HOME/*/*) HPWD="${PWD#"${PWD%/*/*}/"}";;
$HOME/*) HPWD="~/${PWD##*/}";;
/*/*/*) HPWD="${PWD#"${PWD%/*/*}/"}";;
*) HPWD="$PWD";;
esac
}
PS1='$HPWD \$' Note that you don't need to, and should not, export PS1 , since it's a shell setting, not an environment variable. A bash PS1 setting wouldn't be understood by other shells. P.S. If you want a nice interactive shell experience, switch to zsh , where all of these (prompt % expansions largely encompassing tcsh's, chpwd , etc.) are native features. PS1='%2~ %# ' | {
"source": [
"https://unix.stackexchange.com/questions/216953",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104149/"
]
} |
217,010 | I copied this line from /proc/some_proc_id/cmdline in my ubuntu machine, java^@-jar^@/usr/lib/selenium/selenium-server-standalone.jar^@-port^@4444^@-trustAllSSLCertificates^@ Somehow, the space characters are represented by ^@ in vi. I tried to replace them with space characters using the command, :%s#^@# #g But it says, pattern not found ^@ . How can one replace special characters, particularly those that start with the caret symbol? | Somehow, the space characters are represented by ^@ in vi. It's not vi that did that. Although you type command lines in shells with spaces between the arguments, command lines are actually discrete sequences of strings internally, not one long space-separated string. The shell separated the command line into individual argument strings before the command was launched. In C, strings are terminated with NUL characters and those are shown as ^@ . How can one replace special characters, particularly those that start with the caret symbol? In order to type those characters, you must prefix them with Control - v for literal next character. For example in this case: Control - v followed by Control - @ . The special character that introduces literal next characters is normally Control - v but it is actually configurable. Type stty -a to find out what it is set to. Look for lnext in the output. | {
"source": [
"https://unix.stackexchange.com/questions/217010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
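Outside of vi, a shell one-liner does the same substitution; tr here is a plain swap for the interactive edit described above:

```sh
# NUL bytes separate the arguments in /proc/<pid>/cmdline; turn each into a space
tr '\0' ' ' < /proc/some_proc_id/cmdline; echo
```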
217,295 | Am I wrong in my interpretation that I should basically just put the following first, before all make rules: .PHONY: all of my rules
all:
echo "Executing all ..."
of:
echo "Executing of ..."
my:
echo "Executing my ..."
rules:
echo "Executing rules ..." Is there ever a case where you would not want to follow this 'formula'? http://www.gnu.org/software/make/manual/make.html#Phony-Targets | Clark Grubb's Makefile style guide recommends that: All phony targets should be declared by making them prerequisites of .PHONY. add each phony target as a prerequisite of .PHONY immediately before the target declaration, rather than listing all the phony targets in a single place. No file targets should be prerequisites of .PHONY. phony targets should not be prerequisites of file targets. For your example, this would mean: .PHONY: all
all:
echo "Executing all ..."
.PHONY: of
of:
echo "Executing of ..."
.PHONY: my
my:
echo "Executing my ..."
.PHONY: rules
rules:
echo "Executing rules ..." Multiple PHONY targets are allowed; see also this Stack Overflow question: "Is it possible to have multiple .PHONY targets in a gnu makefile?" Also, while this isn't mentioned directly in your question, care must be taken not to have a PHONY target with the same name of an actual input or intermediate files in your project. Eg, if your project hypothetically had a source code file named rules (with no suffix), the inclusion of that string in a PHONY target could break expected make behavior. | {
"source": [
"https://unix.stackexchange.com/questions/217295",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
217,369 | I am building an image for an embedded Linux based on Debian. I did use apt-get update before on the device that I want to use as a base for that image, so the lists under /var/lib/apt/lists are quite large (almost 100 MB in size). I want to keep apt-get functionality (so I don't want to remove apt repositories) but I want to free the space used up in these lists (the lists almost double the size of the image). Does anyone know how to do that? Can I just delete the files under /var/lib/apt/lists ? | You can just use: rm /var/lib/apt/lists/* This will remove the package lists. No repositories will be deleted; they are configured in the config file /etc/apt/sources.list . All that can happen is that tools like apt-cache cannot get package information until you update the package lists again. Also apt-get install will fail with E: Unable to locate package <package> , because no information is available about the package. Then just run: apt-get update to rewrite those lists and the command will work again. Anyway, it's recommended to run apt-get update before installing anything. | {
"source": [
"https://unix.stackexchange.com/questions/217369",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116266/"
]
} |
217,622 | I need to add a path in a bash script, but it may be executed several times: export PATH=${OPENSHIFT_HOMEDIR}/app-root/runtime/bin/:${PATH} I don't want that path to be added over and over. How can I add it if it is not in $PATH yet? | First check if the path to add is already part of the variable: [[ ":$PATH:" != *":/path/to/add:"* ]] && PATH="/path/to/add:${PATH}" If /path/to/add is already in the $PATH , then nothing happens, else it is added at the beginning. If you need it at the end use PATH=${PATH}:/path/to/add instead. Edit : In your case it would look like this: [[ ":$PATH:" != *":${OPENSHIFT_HOMEDIR}/app-root/runtime/bin:"* ]] && PATH="${OPENSHIFT_HOMEDIR}/app-root/runtime/bin:${PATH}" | {
"source": [
"https://unix.stackexchange.com/questions/217622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15657/"
]
} |
217,628 | I have a filename like a.b.c.txt , I want this string to be split as string1=a.b.c
string2=txt Basically I want to split the filename from its extension. I used cut but it splits as a,b,c and txt . I want to cut the string on the last delimiter. Can somebody help? | #For Filename
echo "a.b.c.txt" | rev | cut -d"." -f2- | rev
#For extension
echo "a.b.c.txt" | rev | cut -d"." -f1 | rev | {
"source": [
"https://unix.stackexchange.com/questions/217628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15340/"
]
} |
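A pure-shell alternative to the rev/cut pipeline, using POSIX parameter expansion on the same example filename:

```sh
f=a.b.c.txt
string1=${f%.*}    # strip the shortest ".*" suffix -> a.b.c
string2=${f##*.}   # strip the longest "*." prefix  -> txt
echo "$string1 / $string2"
```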
217,905 | I've looked around a bit for an answer to this question but I don't seem to find it (which is weird). My question is, is there any simple way to restart the BASH session from within the terminal on Mac? I just want the same behaviour as if I closed the terminal application and started it again (all variables reset, .bash_profile sourced etc). I know how to source .bash_profile , but that's not what I want. One of the reasons I want to do this is because a plugin for my BASH prompt has code that prevents colors from being loaded multiple times. Therefore, sourcing .bash_profile doesn't reload the color variables and I have to restart the terminal application to get the changes to take effect. | exec bash should replace the current shell process with (a new instance of) bash. EDIT: Seems from answers below that Catalina replaces bash with zsh. You can run exec zsh in that case, or alternatively exec "$SHELL" which should always use your default shell. | {
"source": [
"https://unix.stackexchange.com/questions/217905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124423/"
]
} |
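Since the asker wants .bash_profile re-read, a login-shell variant of the answer may be closer to a fresh terminal window:

```sh
exec bash -l   # replace the shell with a new *login* bash, re-reading .bash_profile
```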
217,936 | I have a bunch of binaries and I know that inside these binaries there are strings I want to find. I want to do a: grep -lir "the string I am looking for" and get a list of all binaries inside a particular directory that contain that string but grep -lir is apparently not working with these files. Is there a command that can do this kind of search from terminal? | With GNU grep , you can use the -a option to make it treat binary files as text files: grep -ali -- string file If your grep version does not support -a , you can use ack instead. With ack 1.x, you need to include the -a option; with ack 2.x, you don't, since it searches explicitly named non-text files by default (non-text files are only skipped when you did not specify any files). | {
"source": [
"https://unix.stackexchange.com/questions/217936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45335/"
]
} |
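A recursive sketch matching the grep -lir from the question, just with -a added:

```sh
# -r recurse, -a treat binaries as text, -i ignore case, -l list file names only
grep -rail -- "the string I am looking for" /some/directory
```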
217,939 | I'd like to know whether there are any Systemd equivalents for *BSD distributions, basically something that can handle dependencies between services (service A requires B to be started, so until B is ready don't start A) and has a sane service file format (like a configuration file that tells it what to start and when, instead of an initscript). After using it on Linux I can't even think of going back to a legacy initscripts-based distribution, and yet I'd like to try a BSD (I need a very minimal system for a router & access point). | With GNU grep , you can use the -a option to make it treat binary files as text files: grep -ali -- string file If your grep version does not support -a , you can use ack instead. With ack 1.x, you need to include the -a option; with ack 2.x, you don't, since it searches explicitly named non-text files by default (non-text files are only skipped when you did not specify any files). | {
"source": [
"https://unix.stackexchange.com/questions/217939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124721/"
]
} |
218,074 | I wanted to find out how many cores my system has, so I searched the same question in Google. I got some commands such as the lscpu command.
When I tried this command, it gave me the following result: $ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 23
Stepping: 10
CPU MHz: 1998.000
BogoMIPS: 5302.48
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 2048K
NUMA node0 CPU(s): 0-3 In particular, this output shows: CPU(s): 4 Core(s) per socket: 4 CPU family: 6 Which of those indicates the number of cores on a Linux system? Is there any other command to tell the number of cores, or is my assumption completely wrong? | You have to look at the sockets and the cores per socket. In this case you have 1 physical CPU (socket) which has 4 cores (cores per socket). | {
"source": [
"https://unix.stackexchange.com/questions/218074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124316/"
]
} |
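A few quick cross-checks, sketched under the assumption of a Linux system with coreutils and procfs:

```sh
nproc                                    # number of online processing units
grep -c '^processor' /proc/cpuinfo       # logical CPUs (sockets x cores x threads)
lscpu | grep -E '^(Socket|Core|Thread)'  # the three factors shown separately
```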
218,076 | There are different sources and different practices are suggested. I found the following proposals for how often one should run fstrim: run weekly by cron, run daily by cron, run at each boot. What is the optimal option and why? Ubuntu 14.04 uses the first option by default. | TRIM does at least three things: minimize write amplification, prevent long-term performance degradation, irrecoverably delete your data. Now it depends where your priorities are. For 1), you should not be using fstrim at all, but make use of the discard option of your filesystem. Only if everything is trimmed instantly will the SSD stop copying no longer needed bits of data around. In practice though, it has been shown that preventing write amplification is not that important, since SSDs are fine with lots of writes. For 2), using fstrim weekly or even monthly is completely fine. There is no need to use instant discard, or to trim daily - that would be a short-term measure, but this is about keeping the SSD happy in the long-term. But it also depends on your usage: if your filesystem is always full and sees lots of writes, you might need to trim more regularly than if you tend to have lots of free space and not that many writes in your filesystems. For 3), you should not be using any kind of trim at all. Basically if you expect to be human, making errors, having accidents - like you just deleted your photo collection, whoops - recovery tools like photorec won't work after TRIM because with TRIM everything is gone forever. From a pure data recovery point of view, SSD is a huge headache. There's too much trim happening in Linux, even without asking you ( mkfs implies trim, lvremove / lvresize /... might if issue_discards , some partitioners might be having ideas, ...). Suddenly previously reversible actions become irreversible, all for the sake of getting a few more points in some filesystem benchmark... If you decide on fstrim you should know where the cron job is located so you can disable it when you have an accident; that way you get a compromise between 2) and 3). In general with SSDs you should make sure you have good backups; they are even more important than with HDDs since you have less chance of recovery on an SSD. | {
"source": [
"https://unix.stackexchange.com/questions/218076",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61003/"
]
} |
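If you do settle on a weekly schedule, systemd-based systems usually ship a ready-made timer with util-linux (a sketch; very old systemctl versions lack --now):

```sh
sudo systemctl enable --now fstrim.timer
systemctl list-timers fstrim.timer   # verify when the next run is due
```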
218,163 | How to install Cuda Toolkit 7.0 or 8 on Debian 8? I know that Debian 8 comes with the option to download and install CUDA Toolkit 6.0 using apt-get install nvidia-cuda-toolkit , but how do you do this for CUDA toolkit version 7.0 or 8? I tried installing using the Ubuntu installers, as described below: sudo wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.0-28_amd64.deb
dpkg -i cuda-repo-ubuntu1404_7.0-28_amd64.deb
sudo apt-get update
sudo apt-get install -y cuda However it did not work and the following message was returned: Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
cuda : Depends: cuda-7-0 (= 7.0-28) but it is not going to be installed
E: Unable to correct problems, you have held broken packages. | The following instructions are valid for CUDA 7.0, 7.5, and several previous (and probably later) versions. As far as Debian distributions, they're valid for Jessie and Stretch and probably other versions. They assume an amd64 (x86_64) architecture, but you can easily adapt them for x86 (x86_32). Installation prerequisites g++ - You should use the newest GCC version supported by your version of CUDA. For CUDA 7.x this would be version 4.9.3, last of the 4.x line; for CUDA 8.0, GCC 5.x versions are supported. If your distribution uses GCC 5.x by default, use that, otherwise GCC 5.4.0 should do. Earlier versions are usable but I wouldn't recommend them, if only for the better modern-C++ feature support for host-side code. gcc - comes with g++. I even think CMake might default to having nvcc invoke gcc rather than g++ in some cases with a -x switch (but not sure about this). libGLU - Mesa OpenGL libraries (+ development files?) libXi - X Window System Xinput extension libraries (+ development files?) libXmu - X Window System "miscellaneous utilities" library (+ development files?) Linux kernel - headers for the kernel version you're running. If you want a list of specific packages - well, that depends on exactly which distribution you're using. But you can try the following (for CUDA 7.x): sudo apt-get install gcc g++ gcc-4.9 g++-4.9 libxi libxi6 libxi-dev libglu1-mesa libglu1-mesa-dev libxmu6 libxmu6-dev linux-headers-amd64 linux-source And you might add some -dbg versions of those packages for debugging symbols. I'm pretty sure this covers it all - but I might have missed something I just had installed already. Also, CUDA can work with clang , at least experimentally, but I haven't tried that. Installing the CUDA kernel driver Go to NVIDIA's CUDA Downloads page . Choose Linux > x86_64 > Ubuntu , and then whatever latest version they have (at the time of writing: Ubuntu 15.04). Choose the .run file option. Download the .run file (currently this one ). Make sure not to put it in /tmp . Make the .run file executable: chmod a+x cuda_7.5.18_linux.run . Become root. Execute the .run file: Pretend to accept their silly shrink-wrap license; say "yes" to installing just the NVIDIA kernel driver, and say "no" to everything else. The installation should tell you it expects to have installed the NVIDIA kernel driver, but that you should reboot before continuing/retrying the toolkit installation. So... Having apparently succeeded, reboot. Installing CUDA itself Be root. Locate and execute cuda_7.5.18_linux.run This time around, say No to installing the driver, but Yes to installing everything else, and accept the default paths (or change them, whatever works for you). The installer is likely to now fail . That is a good thing assuming it's the kind of failure we expect: It should tell you your compiler version is not supported - CUDA 7.0 or 7.5 supports up to gcc 4.9 and you have some 5.x version by default. Now, if you get a message about missing libraries , that means my instructions above regarding prerequisites somehow failed, and you should comment here so I can fix them. Assuming you got the "good failure", proceed to: Re-invoke the .run file, this time with the --override option. Make the same choices as in step 11. CUDA should now be installed, by default under /usr/local/cuda (that's a symlink). But we're not done! 
Directing NVIDIA's nvcc compiler to use the right g++ version NVIDIA's CUDA compiler actually calls g++ as part of the linking process and/or to compile actual C++ rather than .cu files. I think. Anyway, it defaults to running whatever's in your path as g++ ; but if you place another g++ under /usr/local/cuda/bin , it will use that first! So... Execute symlink /usr/bin/g++-4.9 /usr/local/cuda/bin/g++ (and for good measure, maybe also symlink /usr/bin/gcc-4.9 /usr/local/cuda/bin/gcc . That's it. Trying out the installation cd /root/NVIDIA_CUDA-7.5_Samples/0_Simple/vectorAdd make The build should conclude successfully, and when you do ./vectorAdd you should get the following output: root@mymachine:~/NVIDIA_CUDA-7.5_Samples/0_Simple/vectorAdd# ./vectorAdd
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done Notes You don't need to install the NVIDIA GDK (GPU Development Kit), but it doesn't hurt and it might be useful for some. Install it to the root directory of your system; it's pretty safe and there's an uninstaller afterwards: /usr/bin/uninstall_gdk.pl . In CUDA 8 it's already integrated into CUDA itself IIANM. Do not install additional packages with names like nvidia-... or cuda... ; they might not hurt but they'll certainly not help. Before doing any of these things, you might want to make sure your GPU is recognized at all, using lspci | grep -i nvidia . | {
"source": [
"https://unix.stackexchange.com/questions/218163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124679/"
]
} |
218,169 | To launch a root shell on machines where the root account is disabled, you can run one of: sudo -i : run an interactive login shell (reads /root/.bashrc and /root/.profile ) sudo -s : run a non-login interactive shell (reads /root/.bashrc ) In the Ubuntu world, I very often see sudo su suggested as a way to get a root shell. Why run two separate commands when one will do? As far as I can tell, sudo -i is equivalent to sudo su - and sudo -s is the same as sudo su . The only differences seem to be (comparing sudo -i on the left and sudo su - on the right): And comparing sudo -s (left) and sudo su (right): The main differences (ignoring the SUDO_foo variables and LS_COLORS ) seem to be the XDG_foo system variables in the sudo su versions. Are there any cases where that difference warrants using the rather inelegant sudo su ? Can I safely tell people (as I often have) that there's never any point in running sudo su or am I missing something? | As you stated in your question, the main difference is the environment. sudo su - vs. sudo -i In case of sudo su - it is a login shell, so /etc/profile , .profile and .bashrc are executed and you will find yourself in root's home directory with root's environment. sudo -i is nearly the same as sudo su - The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell. This means that login-specific resource files such as .profile , .bashrc or .login will be read and executed by the shell. sudo su vs. sudo -s sudo su calls sudo with the command su . Bash is called as interactive non-login shell. So bash only executes .bashrc . You can see that after switching to root you are still in the same directory: user@host:~$ sudo su
root@host:/home/user# sudo -s reads the $SHELL variable and executes the content. If $SHELL contains /bin/bash it invokes sudo /bin/bash , which means that /bin/bash is started as a non-login shell, so the dot-files are not executed, but bash itself reads the .bashrc of the calling user. Your environment stays the same. Your home will not be root's home. So you are root, but in the environment of the calling user. Conclusion The -i flag was added to sudo in 2004 , to provide a similar function to sudo su - , so sudo su - was the template for sudo -i and it is meant to work like it. I think it doesn't really matter which you use, unless the environment is important. Addition A basic point that must be mentioned here is that sudo was designed to run only one single command with higher privileges and then drop those privileges to the original ones. It was never meant to really switch the user and leave open a root shell. Over time, sudo was expanded with such mechanisms, because people were annoyed about having to put sudo in front of every command. So the meaning of sudo was abused. sudo was meant to encourage the user to minimize the use of root privileges. What we have now is that sudo becomes more and more popular; it is integrated in nearly every well known Linux distribution. The original tool to switch to another user account is su . For an old school *nix veteran such a thing as sudo might seem needless. It adds complexity and behaves more like the mechanisms we know from Microsoft's OS family, and is thus contrary to the philosophy of simplicity of *nix systems. I'm not really a veteran, but also in my opinion sudo was always a thorn in my side, from the time it was introduced, and I always worked around the usage of sudo if it was possible. I am most reluctant to use sudo . On all my systems, the root account is enabled. But things change; maybe the time will come when su will be deprecated and sudo replaces su completely. Therefore I think it is best to use sudo 's internal mechanisms ( -s , -i ) instead of relying on an old tool such as su . | {
"source": [
"https://unix.stackexchange.com/questions/218169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
218,172 | I have a bunch of directories in which there are 3 .csv files with different names. For example, in my directories aa bb cc dd there are 3 files in each: aa: EA_sing_aa.csv EA_ska_aa.csv EA_tat_aa.csv
bb: EA_sing_bb.csv EA_ska_bb.csv EA_tat_bb.csv
cc: EA_sing_cc.csv EA_ska_cc.csv EA_tat_cc.csv
dd: EA_sing_dd.csv EA_ska_dd.csv EA_tat_dd.csv I want to add the name of each file to a new column as row names to each file and then combine all EA_sing*.csv files together, combine all EA_ska*.csv files together, and also combine all EA_tat*.csv files together!
my output will be just 3 files: 1) EA_sing.csv ##the first column for the rows from EA_sing_aa.csv file
will be aa and for the rows from EA_sing_bb.csv will be bb
and for the rows from EA_sing_cc.csv will be cc..... ##
2) EA_ska.csv
3) EA-tat.csv How can I do this in *nix?
Thanks | As you stated in your question, the main difference is the environment. sudo su - vs. sudo -i In case of sudo su - it is a login shell, so /etc/profile , .profile and .bashrc are executed and you will find yourself in root's home directory with root's environment. sudo -i is nearly the same as sudo su - The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell. This means that login-specific resource files such as .profile , .bashrc or .login will be read and executed by the shell. sudo su vs. sudo -s sudo su calls sudo with the command su . Bash is called as interactive non-login shell. So bash only executes .bashrc . You can see that after switching to root you are still in the same directory: user@host:~$ sudo su
root@host:/home/user# sudo -s reads the $SHELL variable and executes the content. If $SHELL contains /bin/bash it invokes sudo /bin/bash , which means that /bin/bash is started as a non-login shell, so the dot-files are not executed, but bash itself reads the .bashrc of the calling user. Your environment stays the same. Your home will not be root's home. So you are root, but in the environment of the calling user. Conclusion The -i flag was added to sudo in 2004 , to provide a similar function to sudo su - , so sudo su - was the template for sudo -i and it is meant to work like it. I think it doesn't really matter which you use, unless the environment is important. Addition A basic point that must be mentioned here is that sudo was designed to run only one single command with higher privileges and then drop those privileges to the original ones. It was never meant to really switch the user and leave open a root shell. Over time, sudo was expanded with such mechanisms, because people were annoyed about having to put sudo in front of every command. So the meaning of sudo was abused. sudo was meant to encourage the user to minimize the use of root privileges. What we have now is that sudo becomes more and more popular; it is integrated in nearly every well known Linux distribution. The original tool to switch to another user account is su . For an old school *nix veteran such a thing as sudo might seem needless. It adds complexity and behaves more like the mechanisms we know from Microsoft's OS family, and is thus contrary to the philosophy of simplicity of *nix systems. I'm not really a veteran, but also in my opinion sudo was always a thorn in my side, from the time it was introduced, and I always worked around the usage of sudo if it was possible. I am most reluctant to use sudo . On all my systems, the root account is enabled. But things change; maybe the time will come when su will be deprecated and sudo replaces su completely. Therefore I think it is best to use sudo 's internal mechanisms ( -s , -i ) instead of relying on an old tool such as su . | {
"source": [
"https://unix.stackexchange.com/questions/218172",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124858/"
]
} |
218,270 | I've found some special parameters in Bash that start with $ (the dollar sign). For example, when I wanted to know the exit status, I used $? . For getting the process ID, there's $$ . What are the special Bash (shell) parameters and their usage? | Referring to 3.4.2 Special Parameters from the Bash Reference Manual . Special Parameters: * ( $* ) Expands to the positional parameters, starting from one. When the expansion is not within double quotes, each positional parameter expands to a separate word. In contexts where it is performed, those words are subject to further word splitting and pathname expansion. When the expansion occurs within double quotes, it expands to a single word with the value of each parameter separated by the first character of the IFS special variable. That is, "$*" is equivalent to "$1c$2c…" , where c is the first character of the value of the IFS variable. If IFS is unset, the parameters are separated by spaces. If IFS is null, the parameters are joined without intervening separators. @ ( $@ ) Expands to the positional parameters, starting from one. When the expansion occurs within double quotes, each parameter expands to a separate word. That is, "$@" is equivalent to "$1" "$2" … . If the double-quoted expansion occurs within a word, the expansion of the first parameter is joined with the beginning part of the original word, and the expansion of the last parameter is joined with the last part of the original word. When there are no positional parameters, "$@" and $@ expand to nothing (i.e., they are removed). # ( $# ) Expands to the number of positional parameters in decimal. ? ( $? ) Expands to the exit status of the most recently executed foreground pipeline. - ( $- , a hyphen.) Expands to the current option flags as specified upon invocation, by the set builtin command, or those set by the shell itself (such as the -i option). $ ( $$ ) Expands to the process ID of the shell. In a () subshell, it expands to the process ID of the invoking shell, not the subshell. ! ( $! ) Expands to the process ID of the job most recently placed into the background, whether executed as an asynchronous command or using the bg builtin (see Job Control Builtins ). 0 ( $0 ) Expands to the name of the shell or shell script. This is set at shell initialization. If Bash is invoked with a file of commands (see Shell Scripts ), $0 is set to the name of that file. If Bash is started with the -c option (see Invoking Bash ), then $0 is set to the first argument after the string to be executed, if one is present. Otherwise, it is set to the filename used to invoke Bash, as given by argument zero. This can also be printed from the man page of Bash: $ man bash | awk '/Special Parameters$/','/Shell Variables$/' The above are the same as the special parameters defined in POSIX . In addition, there are the positional parameters $1 , $2 , ... that contain the command line arguments to the shell or the current function ( 3.4.1 Positional Parameters ). They are also a POSIX feature. Older versions of Bash also listed $_ as a special parameter, but it's now listed among other variables set by the shell ( 5.2 Bash Variables ). $_ is not POSIX and other shells may not support it. _ ( $_ , an underscore.) At shell startup, set to the absolute pathname used to invoke the shell or shell script being executed as passed in the environment or argument list. Subsequently, expands to the last argument to the previous command, after expansion. 
Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command. When checking mail, this parameter holds the name of the mail file. | {
"source": [
"https://unix.stackexchange.com/questions/218270",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66803/"
]
} |
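A tiny demo script tying a few of these parameters together; save it and run it with some arguments:

```sh
#!/bin/sh
echo "script name: $0"
echo "arg count:   $#"
echo "all args:    $@"
echo "shell PID:   $$"
false
echo "last status: $?"   # prints 1, the exit status of 'false'
```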
218,557 | I am facing an issue with creating soft links. Following is the original file. $ ls -l /etc/init.d/jboss
-rwxr-xr-x 1 askar admin 4972 Mar 11 2014 /etc/init.d/jboss Link creation is failing with a permission issue for the owner of the file: ln -sv jboss /etc/init.d/jboss1
ln: creating symbolic link `/etc/init.d/jboss1': Permission denied
$ id
uid=689(askar) gid=500(admin) groups=500(admin) So, I created the link with sudo privileges: $ sudo ln -sv jboss /etc/init.d/jboss1
`/etc/init.d/jboss1' -> `jboss'
$ ls -l /etc/init.d/jboss1
lrwxrwxrwx 1 root root 11 Jul 27 17:24 /etc/init.d/jboss1 -> jboss Next I tried to change the ownership of the soft link to the original user. $ sudo chown askar.admin /etc/init.d/jboss1
$ ls -l /etc/init.d/jboss1
lrwxrwxrwx 1 root root 11 Jul 27 17:24 /etc/init.d/jboss1 -> jboss But the permission of the soft link is not getting changed. What am I missing here to change the permission of the link? | On a Linux system, when changing the ownership of a symbolic link using chown , by default it changes the target of the symbolic link (ie, whatever the symbolic link is pointing to ). If you'd like to change ownership of the link itself, you need to use the -h option to chown : -h, --no-dereference affect each symbolic link instead of any referenced file (useful only on systems that can change the ownership of a symlink) For example: $ touch test
$ ls -l test*
-rw-r--r-- 1 mj mj 0 Jul 27 08:47 test
$ sudo ln -s test test1
$ ls -l test*
-rw-r--r-- 1 mj mj 0 Jul 27 08:47 test
lrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> test
$ sudo chown root:root test1
$ ls -l test*
-rw-r--r-- 1 root root 0 Jul 27 08:47 test
lrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> test Note that the target of the link is now owned by root. $ sudo chown mj:mj test1
$ ls -l test*
-rw-r--r-- 1 mj mj 0 Jul 27 08:47 test
lrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> test And again, the link test1 is still owned by root, even though test has changed. $ sudo chown -h mj:mj test1
$ ls -l test*
-rw-r--r-- 1 mj mj 0 Jul 27 08:47 test
lrwxrwxrwx 1 mj mj 4 Jul 27 08:47 test1 -> test And finally we change the ownership of the link using the -h option. | {
"source": [
"https://unix.stackexchange.com/questions/218557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37444/"
]
} |
218,668 | I always use either rsync or scp in order to copy files from/to a remote machine. Recently, I discovered in the manual of scp ( man scp ) the flag -C -C Compression enable. Passes the -C flag to
ssh(1) to enable compression. Before I discovered this flag, I used to zip before and then scp . Is it as efficient to just use -C as it is to zip and unzip? When does one approach or the other make the transfer faster? | It's never really going to make any big difference, but zipping the file before copying it ought to be a little bit less efficient, since using a container format such as zip that can encapsulate multiple files (like tar ) is unnecessary and it is not possible to stream zip input and output (so you need a temporary file). Using gzip instead of zip, on the other hand, ought to be exactly the same, since it's what ssh -C does under the hood... except that gzipping yourself is more work than just using ssh -C . | {
"source": [
"https://unix.stackexchange.com/questions/218668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114428/"
]
} |
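Two ways to get the same in-transit compression in practice (file and host names are placeholders):

```sh
scp -C bigfile user@host:/tmp/      # gzip-style compression inside the SSH stream
rsync -az bigfile user@host:/tmp/   # rsync's -z compresses during transfer too
```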
218,673 | How is install different from a simple copy, cp or dd ? I just compiled a little utility and want to add it to /usr/sbin so it becomes available via my PATH variable. Why use one vs the other? | To "install" a binary compiled from source, the best practice is to put it under the directory: /usr/local/bin On some systems that path is already in your PATH variable; if not, you can add it by adapting the PATH variable in one of your profile configuration files ~/.bashrc ~/.profile PATH=${PATH}:/usr/local/bin dd is a low-level copy tool that is mostly used to copy exactly sized blocks from the source, which could be, for example, a file or a device. cp is the common command to copy files and directories, also recursively with the option -r and preserving the permissions with the option -p . install is mostly similar to cp but additionally provides the option to set the destination file properties directly without having to use chmod separately. cp your files to /usr/local/bin and adapt the PATH variable if needed. That's what I would do. | {
"source": [
"https://unix.stackexchange.com/questions/218673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
218,815 | I usually connect to remote Linux servers from a specific Windows server (W1). On the Windows side, I use putty and on the Linux side, I start tmux . Occasionally, I have to use a different Windows server (W2) and connect to the same tmux sessions. Problem: If I had set a size for the putty window on W1, then I cannot exceed this size on W2. When I maximise the putty window, the extra space is unusable, filled with ~ characters. Is there a way to "force" a resize on W2, even if that means W1 will show only partial output? Or a way to make W1 get disconnected from the tmux session? | With tmux list-client , you can list all clients connected to tmux sessions. For instance: $ tmux list-client
/dev/pts/6: 0 [25x80 xterm] (utf8)
/dev/pts/8: 0 [25x80 xterm] (utf8) From this point, you can choose to detach a specified client, or all clients of a specified session. Say I want to detach everyone connected to session 0: $ tmux detach-client -s 0 Then, you can attach the session so the size will be yours. Actually, all of that can be done with tmux attach -d (the -d option forces all other clients to detach). | {
"source": [
"https://unix.stackexchange.com/questions/218815",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54246/"
]
} |
218,816 | I want to copy and rename multiple C source files in a directory. I can copy like this: $ cp *.c $OTHERDIR But I want to give a prefix to all the file names: file.c --> old#file.c How can I do this in one step? | A for loop: for f in *.c; do cp -- "$f" "$OTHERDIR/old#$f"; done I often add the -v option to cp to allow me to watch the progress. | {
"source": [
"https://unix.stackexchange.com/questions/218816",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119219/"
]
} |
218,911 | On my Kubuntu 14.04 (which has python 2.7.6 as standard) my python is broken after I tried to install python 2.7.10, building it from source from python.org with the help of How to install the latest Python version on Debian separately or upgrade? . I am not able to repair it with the standard commands. I suspect that my dpkg is somehow confused/broken regarding the python installation.
python:
Installed: 2.7.10-1
Candidate: 2.7.10-1
Version table:
*** 2.7.10-1 0
100 /var/lib/dpkg/status
2.7.5-5ubuntu3 0
500 http://de.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
$ /usr/bin/python2.7
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> exit() The reason I tried to install python 2.7.10 is that I needed it for another program (because of issues with ssl / openssl of python 2.7.6), but now I just want to get my system repaired - just let it be python 2.7.6. The full technical details I started trying to solve this by asking on Ubuntu https://askubuntu.com/questions/648424/muon-is-gone-after-change-of-python-issues-after-python-2-7-10-installation-on but I did not get any answer there. Maybe it was the wrong crowd. I have tried quite a bit since then and have an idea of what the problem is, but don't know the steps to fix it. It started with me not being able to install muon with sudo apt-get install muon :
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
muon : Depends: apt-xapian-index but it is not going to be installed
E: Unable to correct problems, you have held broken packages. The typical advice (e.g. from https://askubuntu.com/questions/118749/package-system-is-broken-how-to-fix-it ) does not help: sudo apt-get autoremove
sudo apt-get clean
sudo apt-get autoclean
sudo apt-get update
sudo apt-get upgrade -f
sudo apt-get -f install muon or sudo apt-get -f install or sudo dpkg --configure -a
sudo apt-get update && sudo apt-get dist-upgrade
sudo apt-get install muon or sudo apt-get -o dpkg::options::="--force-confnew" -o dpkg::options::="--force-confmiss" --reinstall install muon did not help. So I tried $ sudo apt-get install apt-xapian-index
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
apt-xapian-index : Depends: python-xapian (>= 1.0.2) but it is not going to be installed
Depends: python-apt (>= 0.7.93.2) but it is not going to be installed
Depends: python-debian (>= 0.1.14) but it is not going to be installed
Depends: python:any (>= 2.7.1-0ubuntu2)
E: Unable to correct problems, you have held broken packages. and found out the issue is with other programs as well like $ sudo apt-get install meld
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
meld : Depends: python:any (>= 2.7.1-0ubuntu2)
Depends: python-gtk2 (>= 2.14) but it is not going to be installed
Depends: python-glade2 (>= 2.14) but it is not going to be installed
Depends: python-gobject-2 (>= 2.16) but it is not going to be installed
Recommends: python-gnome2 but it is not going to be installed
Recommends: python-gconf but it is not going to be installed
Recommends: python-gtksourceview2 (>= 2.4) but it is not going to be installed
E: Unable to correct problems, you have held broken packages. So I tried (without luck) $ sudo update-alternatives --config python
update-alternatives: error: no alternatives for python The following did not help either: sudo dpkg -P python2.7
sudo apt-get install python2.7
sudo dpkg -P python-minimal
sudo apt-get autoremove && sudo apt-get clean sudo apt-get update && sudo apt-get -f install I am getting $ apt-cache policy python
python:
Installed: 2.7.10-1
Candidate: 2.7.10-1
Version table:
*** 2.7.10-1 0
100 /var/lib/dpkg/status
2.7.5-5ubuntu3 0
500 http://de.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages Trying to reinstall python does not work $ sudo apt-get -o dpkg::options::="--force-confnew" -o dpkg::options::="--force-confmiss" --reinstall install python
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reinstallation of python is not possible, it cannot be downloaded.
0 upgraded, 0 newly installed, 0 to remove and 16 not upgraded. or $ sudo apt-get -o dpkg::options::="--force-confnew" -o dpkg::options::="--force-confmiss" --reinstall install python2
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package python2 and trying to build an uninstaller does not work either: ~/Python-2.7.10$ sudo make uninstall
make: *** No rule to make target `uninstall'. Stop. So I started to suspect that I have to get dpkg fixed somehow, because $ apt-cache policy python
python:
Installed: 2.7.10-1
Candidate: 2.7.10-1
Version table:
*** 2.7.10-1 0
100 /var/lib/dpkg/status
2.7.5-5ubuntu3 0
500 http://de.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
$ /usr/bin/python2.7
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> exit() More information (Appendix) $ dpkg -l python* | grep -v ^un
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/
Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-===========================================-=======================================-============-=====================================================================================================================================================================================================================
ii python 2.7.10-1 amd64 Python 2.7.10
ii python-apt-common 0.9.3.5ubuntu1 all Python interface to libapt-pkg (locales)
ii python-chardet-whl 2.2.1-2~ubuntu1 all universal character encoding detector
ii python-colorama-whl 0.2.5-0.1ubuntu2 all Cross-platform colored terminal text in Python - Wheels
ii python-cups 1.9.66-0ubuntu2 amd64 Python bindings for CUPS
rc python-cupshelpers 1.4.3+20140219-0ubuntu2.6 all Python modules for printer configuration with CUPS
ii python-dbus-dev 1.2.0-2build2 all main loop integration development files for python-dbus
ii python-distlib-whl 0.1.8-1ubuntu1 all low-level components of python distutils2/packaging
rc python-gobject-2 2.28.6-12build1 amd64 deprecated static Python bindings for the GObject library
ii python-html5lib-whl 0.999-3~ubuntu1 all HTML parser/tokenizer based on the WHATWG HTML5 specification
ii python-ldb 1:1.1.16-1 amd64 Python bindings for LDB
ii python-minimal 2.7.5-5ubuntu3 amd64 minimal subset of the Python language (default version)
ii python-ntdb 1.0-2ubuntu1 amd64 Python bindings for NTDB
ii python-pam 0.4.2-13.1ubuntu3 amd64 Python interface to the PAM library
ii python-pip-whl 1.5.4-1ubuntu3 all alternative Python package installer
ii python-renderpm 3.0-1build1 amd64 python low level render interface
ii python-reportlab-accel 3.0-1build1 amd64 C coded extension accelerator for the ReportLab Toolkit
ii python-requests-whl 2.2.1-1ubuntu0.3 all elegant and simple HTTP library for Python, built for human beings
ii python-setuptools-whl 3.3-1ubuntu2 all Python Distutils Enhancements (wheel package)
ii python-six-whl 1.5.2-1ubuntu1 all Python 2 and 3 compatibility library (universal wheel)
rc python-support 1.0.15 all automated rebuilding support for Python modules
ii python-talloc 2.1.0-1 amd64 hierarchical pool based memory allocator - Python bindings
ii python-tdb 1.2.12-1 amd64 Python bindings for TDB
ii python-twisted-bin 13.2.0-1ubuntu1 amd64 Event-based framework for internet applications
rc python-twisted-core 13.2.0-1ubuntu1 all Event-based framework for internet applications
rc python-ubuntu-sso-client 13.10-0ubuntu6 all Ubuntu Single Sign-On client - Python library
ii python-urllib3-whl 1.7.1-1ubuntu3 all HTTP library with thread-safe connection pooling
ii python2.7 2.7.6-8ubuntu0.2 amd64 Interactive high-level object-oriented language (version 2.7)
ii python2.7-minimal 2.7.6-8ubuntu0.2 amd64 Minimal subset of the Python language (version 2.7)
ii python3 3.4.0-0ubuntu2 amd64 interactive high-level object-oriented language (default python3 version)
ii python3-apport 2.14.1-0ubuntu3.11 all Python 3 library for Apport crash report handling
ii python3-apt 0.9.3.5ubuntu1 amd64 Python 3 interface to libapt-pkg
ii python3-aptdaemon 1.1.1-1ubuntu5.2 all Python 3 module for the server and client of aptdaemon
ii python3-chardet 2.2.1-2~ubuntu1 all universal character encoding detector for Python3
ii python3-colorama 0.2.5-0.1ubuntu2 all Cross-platform colored terminal text in Python - Python 3.x
ii python3-commandnotfound 0.3ubuntu12 all Python 3 bindings for command-not-found.
ii python3-dbus 1.2.0-2build2 amd64 simple interprocess messaging system (Python 3 interface)
ii python3-dbus.mainloop.qt 4.10.4+dfsg-1ubuntu1 amd64 D-Bus Support for PyQt4 with Python 3
ii python3-debian 0.1.21+nmu2ubuntu2 all Python 3 modules to work with Debian-related data formats
ii python3-defer 1.0.6-2build1 all Small framework for asynchronous programming (Python 3)
ii python3-dev 3.4.0-0ubuntu2 amd64 header files and a static library for Python (default)
ii python3-distlib 0.1.8-1ubuntu1 all low-level components of python distutils2/packaging
ii python3-distupgrade 1:0.220.7 all manage release upgrades
ii python3-gdbm:amd64 3.4.0-0ubuntu1 amd64 GNU dbm database support for Python 3.x
ii python3-gi 3.12.0-1ubuntu1 amd64 Python 3 bindings for gobject-introspection libraries
ii python3-html5lib 0.999-3~ubuntu1 all HTML parser/tokenizer based on the WHATWG HTML5 specification (Python 3)
ii python3-minimal 3.4.0-0ubuntu2 amd64 minimal subset of the Python language (default python3 version)
ii python3-pip 1.5.4-1ubuntu3 all alternative Python package installer - Python 3 version of the package
ii python3-pkg-resources 3.3-1ubuntu2 all Package Discovery and Resource Access using pkg_resources
ii python3-problem-report 2.14.1-0ubuntu3.11 all Python 3 library to handle problem reports
ii python3-pycurl 7.19.3-0ubuntu3 amd64 Python 3 bindings to libcurl
ii python3-pykde4 4:4.13.3-0ubuntu0.1 amd64 Python 3 bindings for the KDE Development Platform
ii python3-pyqt4 4.10.4+dfsg-1ubuntu1 amd64 Python3 bindings for Qt4
ii python3-requests 2.2.1-1ubuntu0.3 all elegant and simple HTTP library for Python3, built for human beings
ii python3-setuptools 3.3-1ubuntu2 all Python3 Distutils Enhancements
ii python3-sip 4.15.5-1build1 amd64 Python 3/C++ bindings generator runtime library
ii python3-six 1.5.2-1ubuntu1 all Python 2 and 3 compatibility library (Python 3 interface)
ii python3-software-properties 0.92.37.3 all manage the repositories that you install software from
ii python3-uno 1:4.2.8-0ubuntu2 amd64 Python-UNO bridge
ii python3-update-manager 1:0.196.13 all python 3.x module for update-manager
ii python3-urllib3 1.7.1-1ubuntu3 all HTTP library with thread-safe connection pooling for Python3
ii python3-wheel 0.24.0-1~ubuntu1 all built-package format for Python
ii python3-xkit 0.5.0ubuntu2 all library for the manipulation of xorg.conf files (Python 3)
ii python3.4 3.4.0-2ubuntu1.1 amd64 Interactive high-level object-oriented language (version 3.4)
ii python3.4-dev 3.4.0-2ubuntu1.1 amd64 Header files and a static library for Python (v3.4)
ii python3.4-minimal 3.4.0-2ubuntu1.1 amd64 Minimal subset of the Python language (version 3.4) $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.2 LTS
Release: 14.04
Codename: trusty $ grep -P '^[ \t]*[^#[ \t]+' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty main restricted
/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty main restricted
/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty-updates main restricted
/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty-updates main restricted
/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty universe
/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty universe
/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty-updates universe
/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty-updates universe
/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty multiverse
/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty multiverse
/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty-updates multiverse
/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty-updates multiverse
/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty-backports main restricted universe multiverse
/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty-backports main restricted universe multiverse
/etc/apt/sources.list:deb http://security.ubuntu.com/ubuntu trusty-security main restricted
/etc/apt/sources.list:deb-src http://security.ubuntu.com/ubuntu trusty-security main restricted
/etc/apt/sources.list:deb http://security.ubuntu.com/ubuntu trusty-security universe
/etc/apt/sources.list:deb-src http://security.ubuntu.com/ubuntu trusty-security universe
/etc/apt/sources.list:deb http://security.ubuntu.com/ubuntu trusty-security multiverse
/etc/apt/sources.list:deb-src http://security.ubuntu.com/ubuntu trusty-security multiverse
/etc/apt/sources.list:deb http://archive.canonical.com/ubuntu trusty partner
/etc/apt/sources.list:deb http://extras.ubuntu.com/ubuntu trusty main
/etc/apt/sources.list:deb http://cran.uni-muenster.de/bin/linux/ubuntu trusty/
/etc/apt/sources.list.d/fossfreedom-packagefixes-trusty.list:deb http://ppa.launchpad.net/fossfreedom/packagefixes/ubuntu trusty main
/etc/apt/sources.list.d/jitsi.list:deb http://download.jitsi.org/deb unstable/
/etc/apt/sources.list.d/leviatan1-ppa-trusty.list:deb http://ppa.launchpad.net/leviatan1/ppa/ubuntu trusty main $ whereis python
python: /usr/bin/python /usr/bin/python3.4-config /usr/bin/python3.4 /usr/bin/python3.4m /usr/bin/python2.7 /usr/bin/python3.4m-config /etc/python /etc/python3.4 /etc/python2.7 /usr/lib/python3.4 /usr/lib/python2.7 /usr/bin/X11/python /usr/bin/X11/python3.4-config /usr/bin/X11/python3.4 /usr/bin/X11/python3.4m /usr/bin/X11/python2.7 /usr/bin/X11/python3.4m-config /usr/local/lib/python3.4 /usr/local/lib/python2.7 /usr/include/python3.4 /usr/include/python3.4m /usr/share/python /usr/share/man/man1/python.1.gz $ whereis python2.7
python2: /usr/bin/python2.7 /usr/bin/python2 /etc/python2.7 /usr/lib/python2.7 /usr/bin/X11/python2.7 /usr/bin/X11/python2 /usr/local/lib/python2.7 /usr/share/man/man1/python2.1.gz | You have installed Python packages that are more recent than what your distribution provides. For example, you have python version 2.7.10-1 installed but your distribution only has version 2.7.5-5ubuntu3. APT doesn't downgrade packages unless explicitly told to do so. So for example if you try to install a package that depends on the exact version of Python, it won't work, because the python package can't be downgraded. Even apt-get --reinstall install python fails because APT won't downgrade Python to 2.7.5. In order to repair your system, you need to allow APT to perform downgrades. To do that, define APT preferences . Create a file /etc/apt/preferences.d/allow-downgrade containing Package: *
Pin: release o=Ubuntu
Pin-Priority: 1001 The files in /etc/apt/preferences.d (plus /etc/apt/preferences ) contain priority declarations that override the default selection when multiple versions of a package are available, which is “prefer the latest version from the target distribution”. Giving a package a priority over 1000 causes it to be preferred even if it's an older version than a package with a lower priority. Installed packages have priority 100, so the pinned package from Ubuntu wins. For more information see: man apt_preferences I think once you've set these priorities you can run apt-get update
apt-get upgrade to downgrade all your packages to the version in Ubuntu (packages not in Ubuntu won't be removed). Also run apt-get -f install and don't try to install any other software until this completes successfully. Once everything is downgraded, remove the preferences file and run apt-get update again. | {
"source": [
"https://unix.stackexchange.com/questions/218911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122989/"
]
} |
219,038 | > cd /tmp
> ln -s foo
> ls -alhF /tmp
lrwxrwxrwx 1 user user 3 Jul 29 14:00 foo -> foo Is this a bug in ln or is there a use case for symlinking a file to itself? This is with coreutils 8.21-1ubuntu5.1 . | It's not a bug. The use case is for when you want to link a file to the same basename but in a different directory: cd /tmp
ln -s /etc/passwd
ls -l passwd
lrwxrwxrwx 1 xxx xxx 11 Jul 29 09:10 passwd -> /etc/passwd It's true that when you do this with a filename that is in the same directory, it creates a link to itself, which does not do a whole lot of good! Linking to the same basename in a different directory works regardless of whether you use symlinks or hard links. | {
"source": [
"https://unix.stackexchange.com/questions/219038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96718/"
]
} |
219,260 | I have a computer that I need to boot into, but the passwords seem to be bogus. Additionally I can't mount the drive for writing, and it is a mips processor, so I can't stick it in another machine to run it. Anyhow, the passwd file has some users that look like this, with a star after the user name. Does that mean a blank password or what? root:8sh9JBUR0VYeQ:0:0:Super-User,,,,,,,:/:/bin/ksh
sysadm:*:0:0:System V Administration:/usr/admin:/bin/sh
diag:*:0:996:Hardware Diagnostics:/usr/diags:/bin/csh
daemon:*:1:1:daemons:/:/dev/null
bin:*:2:2:System Tools Owner:/bin:/dev/null
uucp:*:3:5:UUCP Owner:/usr/lib/uucp:/bin/csh
sys:*:4:0:System Activity Owner:/var/adm:/bin/sh
adm:*:5:3:Accounting Files Owner:/var/adm:/bin/sh
lp:VvHUV8idZH1uM:9:9:Print Spooler Owner:/var/spool/lp:/bin/sh
nuucp::10:10:Remote UUCP User:/var/spool/uucppublic:/usr/lib/uucp/uucico
auditor:*:11:0:Audit Activity Owner:/auditor:/bin/sh
dbadmin:*:12:0:Security Database Owner:/dbadmin:/bin/sh
rfindd:*:66:1:Rfind Daemon and Fsdump:/var/rfindd:/bin/sh | You have to check man passwd : If the encrypted password is set to an asterisk (*), the user will be
unable to login using login(1), but may still login using rlogin(1),
run existing processes and initiate new ones through rsh(1), cron(8),
at(1), or mail filters, etc. Trying to lock an account by simply
changing the shell field yields the same result and additionally
allows the use of su(1). Accounts with * in the password field have no valid password hash, i.e. password login is disabled for them. This is different from an account whose password field is empty, which allows login without any password and is nearly always a bad practice. | {
"source": [
"https://unix.stackexchange.com/questions/219260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41051/"
]
} |
219,268 | Why does the following command not insert new lines in the generated file and what's the solution? $ echo "Line 1\r\nLine2" >> readme.txt
$ cat readme.txt
Line 1\r\nLine2 | echo An echo implementation which strictly conforms to the Single Unix Specification will add newlines if you do: echo 'line1\nline2' But that is not a reliable behavior. In fact, there really isn't any standard behavior which you can expect of echo . OPERANDS string A string to be written to standard output. If the first operand is -n , or if any of the operands contain a < \ backslash> character, the results are implementation-defined . On XSI-conformant systems, if the first operand is -n , it shall be treated as a string, not an option . The following character sequences shall be recognized on XSI-conformant systems within any of the arguments: \a - Write an <alert> . \b - Write a <backspace> . \c - Suppress the <newline> that otherwise follows the final argument in the output. All characters following the \c in the arguments shall be ignored. \f - Write a <form-feed> . \n - Write a <newline> . \r - Write a <carriage-return> . \t - Write a <tab> . \v - Write a <vertical-tab> . \\ - Write a <backslash> character. \0num - Write an 8-bit value that is the zero, one, two, or three-digit octal number num . And so there really isn't any general way to know how to write a newline with echo , except that you can generally rely on just doing echo to do so. A bash shell typically does not conform to the specification, and handles the -n and other options, but even that is uncertain. You can do: shopt -s xpg_echo
echo hey\\nthere hey
there And not even that is necessary if bash has been built with the build-time option... --enable-xpg-echo-default Make the echo builtin expand backslash-escaped characters by default, without requiring the -e option. This sets the default value of the xpg_echo shell option to on , which makes the Bash echo behave more like the version specified in the Single Unix Specification, version 3. See Bash Builtins , for a description of the escape sequences that echo recognizes. printf On the other hand, printf 's behavior is pretty tame in comparison. RATIONALE The printf utility was added to provide functionality that has historically been provided by echo . However, due to irreconcilable differences in the various versions of echo extant, the version has few special features, leaving those to this new printf utility, which is based on one in the Ninth Edition system. The EXTENDED DESCRIPTION section almost exactly matches the printf() function in the ISO C standard, although it is described in terms of the file format notation in XBD File Format Notation . It handles format strings which describe its arguments - which can be any number of things, but for strings are pretty much either %b yte strings or literal %s trings. Other than the %f ormats in the first argument, it behaves most like a %b yte string argument, except that it doesn't handle the \c escape. printf ^%b%s$ '\n' '\n' '\t' '\t' ^
\n$^ \t$ See Why is printf better than echo ? for more. echo() printf You might write your own standards conformant echo like... echo(){
printf '%b ' "$@\n\c"
} ...which should pretty much always do the right thing automatically. Actually, no... That prints a literal \n at the tail of the arguments if the last argument ends in an odd number of <backslashes> . But this doesn't: echo()
case ${IFS- } in
(\ *) printf %b\\n "$*";;
(*) IFS=\ $IFS
printf %b\\n "$*"
IFS=${IFS#?}
esac | {
"source": [
"https://unix.stackexchange.com/questions/219268",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122146/"
]
} |
219,314 | I have a bash script that generates a report on the progress of some long-running jobs on the machine. Basically, this parent script loops through a list of child scripts (calling them all with source ). The child scripts are expected to set a couple of specific variables, which the parent script will then make use of. Today, I discovered a bug where a variable set by the first child script accidentally got used by the second child script, causing incorrect output. Is there a clean way to prevent these types of bugs from happening? Basically, when I source a child script, there are a couple of specific variables that I want to persist back to the parent script. My parent script resets these specific variables before it source s each new child script, so there are no issues with them. However, some child scripts may have additional arbitrary variables that they use locally that should not persist back to the parent script. Obviously I could manually unset each of these at the end of the child script, but this seems prone to error if I forget one. Is there a more proper way of sourcing a script, and having only certain variables persist to the script that called source ? edit: For sake of clarity, here's a sort of dumbed down version of my parent script: echo "<html><body><h1>My jobs</h1>"
FILES=~/confs/*.sh
for f in $FILES; do
# reset variables
name="Unnamed job"
secsSinceActive="Unknown"
statusText="Unknown"
# run the script that checks on the job
source "$f"
# print bit of report
echo "<h3>$name</h3>"
echo "<p>Last active: $secsSinceActive seconds ago</p>"
echo "<p>Status: $statusText</p>"
echo "</body></html>" And here's what one of the child scripts might look like: name="My awesome job"
nowTime=`expr $(date +%s) `
lastActiveTime=`expr $(date +%s -r ~/blah.log)`
secsSinceActive=`expr $nowTime - $lastActiveTime`
currentRaw=$(cat ~/blah.log | grep "Progress" | tail -n 1)
if [ -z "$currentRaw" ]; then
statusText="Not running"
else
statusText="In progress"
fi The variables $name, $secsSinceActive, and $statusText need to persist back to the parent script, but all the other variables should disappear when the child script terminates. | Wrap the whole script you want to source into a function, add local before the declarations you want to only use in the function, and call the function at the end of the script. func () {
name="My awesome job"
local nowTime=`expr $(date +%s) `
local lastActiveTime=`expr $(date +%s -r ~/blah.log)`
secsSinceActive=`expr $nowTime - $lastActiveTime`
local currentRaw=$(cat ~/blah.log | grep "Progress" | tail -n 1)
if [ -z "$currentRaw" ]; then
statusText="Not running"
else
statusText="In progress"
fi
}
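# calling the function runs the child script's logic; only the non-local variables survive for the sourcing parent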
func | {
"source": [
"https://unix.stackexchange.com/questions/219314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111223/"
]
} |
219,859 | File: Data inserted into table. Total count 13
No error occurred
Data inserted into table. Total count 45
No error occurred
Data inserted into table. Total count 14
No error occurred
Data inserted into table. Total count 90
No error occurred Expected output file: Data inserted into table. Total count 13
Data inserted into table. Total count 45
Data inserted into table. Total count 14
Data inserted into table. Total count 90 I want the output to look this way: every second line will be deleted but there will be no gap between lines. | With sed : sed -e n\;d <file With POSIX awk : awk 'FNR%2' <file If you have older awk (like oawk ), you need: oawk 'NR%2 == 1' <file With ex : $ ex file <<\EX
:g/$/+d
:wq!
EX will edit the file in-place. g marks a global command, /$/ matches every line, +d deletes the line following the match, and wq! saves all changes. This approach shares the same idea as the sed one: starting from line 1, delete the line after the current one. With perl :
raku -ne '.say if ++$ % 2' <file Edit Raku IO::Handle.ins was removed in this commit . | {
"source": [
"https://unix.stackexchange.com/questions/219859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
220,362 | I'm in the process of installing postgresql onto a second server. Previously I installed postgresql and then used the supplied script ./contrib/start-scripts/linux , placed into the correct dir: # cp ./contrib/start-scripts/linux /etc/rc.d/init.d/postgresql92
# chmod 755 /etc/rc.d/init.d/postgresql92 Which I could then execute as expected with # service postgresql92 start However the new machine is using Systemd and it looks like there is a completely different way to do this I don't want to hack at this and ruin something so I was wondering if anyone out there could point me in the right direction of how to achieve the same result | When installing from source, you will need to add a systemd unit file that works with the source install. For RHEL, Fedora my unit file looks like: /usr/lib/systemd/system/postgresql.service [Unit]
Description=PostgreSQL database server
After=network.target
[Service]
Type=forking
User=postgres
Group=postgres
# Where to send early-startup messages from the server (before the logging
# options of postgresql.conf take effect)
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog
# Disable OOM kill on the postmaster
OOMScoreAdjust=-1000
# ... but allow it still to be effective for child processes
# (note that these settings are ignored by Postgres releases before 9.5)
Environment=PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
Environment=PG_OOM_ADJUST_VALUE=0
# Maximum number of seconds pg_ctl will wait for postgres to start. Note that
# PGSTARTTIMEOUT should be less than TimeoutSec value.
Environment=PGSTARTTIMEOUT=270
Environment=PGDATA=/usr/local/pgsql/data
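# Assumption: default source-install layout; point PGDATA at your actual data directory if it differs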
ExecStart=/usr/local/pgsql/bin/pg_ctl start -D ${PGDATA} -s -w -t ${PGSTARTTIMEOUT}
ExecStop=/usr/local/pgsql/bin/pg_ctl stop -D ${PGDATA} -s -m fast
ExecReload=/usr/local/pgsql/bin/pg_ctl reload -D ${PGDATA} -s
# Give a reasonable amount of time for the server to start up/shut down.
# Ideally, the timeout for starting PostgreSQL server should be handled more
# nicely by pg_ctl in ExecStart, so keep its timeout smaller than this value.
TimeoutSec=300
[Install]
WantedBy=multi-user.target Then enable the service on startup and start the PostgreSQL service: $ sudo systemctl daemon-reload # load the updated service file from disk
$ sudo systemctl enable postgresql
$ sudo systemctl start postgresql | {
"source": [
"https://unix.stackexchange.com/questions/220362",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89568/"
]
} |
220,380 | I'm trying to use OpenConnect to connect to my company's Cisco VPN (AnyConnect). The connection seems to work just fine; what I'm not understanding is how to set up routing. I'm doing this from the command line. I use the default VPN script to connect like this: openconnect -u MyUserName --script path_to_vpnc_script myvpngateway.example.com I type in my password, and I'm connected fine, but my default route has changed to force all traffic down the VPN link, whereas I just want company traffic down the VPN link. Are there some variables that I need to be putting into the vpnc-script? It's not very clear how this is done. | The approach is as follows: use the following bash wrapper script to call the vpnc-script. In the wrapper script, the routes to be used for the VPN connection can be specified via a ROUTES variable. #!/bin/bash
#
# Routes that we want to be used by the VPN link
ROUTES="162.73.0.0/16"
# Helpers to create dotted-quad netmask strings.
MASKS[1]="128.0.0.0"
MASKS[2]="192.0.0.0"
MASKS[3]="224.0.0.0"
MASKS[4]="240.0.0.0"
MASKS[5]="248.0.0.0"
MASKS[6]="252.0.0.0"
MASKS[7]="254.0.0.0"
MASKS[8]="255.0.0.0"
MASKS[9]="255.128.0.0"
MASKS[10]="255.192.0.0"
MASKS[11]="255.224.0.0"
MASKS[12]="255.240.0.0"
MASKS[13]="255.248.0.0"
MASKS[14]="255.252.0.0"
MASKS[15]="255.254.0.0"
MASKS[16]="255.255.0.0"
MASKS[17]="255.255.128.0"
MASKS[18]="255.255.192.0"
MASKS[19]="255.255.224.0"
MASKS[20]="255.255.240.0"
MASKS[21]="255.255.248.0"
MASKS[22]="255.255.252.0"
MASKS[23]="255.255.254.0"
MASKS[24]="255.255.255.0"
MASKS[25]="255.255.255.128"
MASKS[26]="255.255.255.192"
MASKS[27]="255.255.255.224"
MASKS[28]="255.255.255.240"
MASKS[29]="255.255.255.248"
MASKS[30]="255.255.255.252"
MASKS[31]="255.255.255.254"
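# (the table covers prefix lengths 1-31; a MASKS[32]="255.255.255.255" entry could be added if /32 host routes are ever needed)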
export CISCO_SPLIT_INC=0
# Create environment variables that vpnc-script uses to configure network
function addroute()
{
local ROUTE="$1"
export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_ADDR=${ROUTE%%/*}
export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_MASKLEN=${ROUTE##*/}
export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_MASK=${MASKS[${ROUTE##*/}]}
export CISCO_SPLIT_INC=$((${CISCO_SPLIT_INC}+1))
}
# Old function for generating NetworkManager 0.8 GConf keys
function translateroute ()
{
local IPADDR="${1%%/*}"
local MASKLEN="${1##*/}"
local OCTET1="$(echo $IPADDR | cut -f1 -d.)"
local OCTET2="$(echo $IPADDR | cut -f2 -d.)"
local OCTET3="$(echo $IPADDR | cut -f3 -d.)"
local OCTET4="$(echo $IPADDR | cut -f4 -d.)"
local NUMADDR=$(($OCTET1*16581375 + $OCTET2*65536 + $OCTET3*256 + $OCTET4))
local NUMADDR=$(($OCTET4*16581375 + $OCTET3*65536 + $OCTET2*256 + $OCTET1))
if [ "$ROUTESKEY" = "" ]; then
ROUTESKEY="$NUMADDR,$MASKLEN,0,0"
else
ROUTESKEY="$ROUTESKEY,$NUMADDR,$MASKLEN,0,0"
fi
}
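# (translateroute appears unused in the flow below; addroute does the actual work)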
if [ "$reason" = "make-nm-config" ]; then
echo "Put the following into the [ipv4] section in your NetworkManager config:"
echo "method=auto"
COUNT=1
for r in $ROUTES; do
echo "routes${COUNT}=${r%%/*};${r##*/};0.0.0.0;0;"
COUNT=$(($COUNT+1))
done
exit 0
fi
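# Normal connection: export one CISCO_SPLIT_INC_<n>_* set per route, then hand off to the stock vpnc-script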
for r in $ROUTES; do
addroute $r
done
exec /etc/openconnect/vpnc-script Then connect as follows: openconnect -u myusername --script wrapper-script -b vpngateway.example.com | {
"source": [
"https://unix.stackexchange.com/questions/220380",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90367/"
]
} |
220,447 | I have a Makefile target, in which I have to check the value of an environment variable. But I can't get the exact syntax right. I've tried hard, but can't find it. Any help is appreciated. Environment variable name: TEST, its value: "TRUE" test_target:
ifeq ($(TEST),"TRUE")
echo "Do something"
endif I get the following error: /bin/sh: -c: line 0: syntax error near unexpected token `"TRUE","TRUE"'
/bin/sh: -c: line 0: `ifeq ("TRUE","TRUE")' | The ifeq directive has to start in column 1; remove the leading tab. Otherwise make treats the line as part of the recipe and hands it to the shell, which produces the /bin/sh syntax error shown above. I.e. test_target:
ifeq ($(TEST),"TRUE")
echo "Do something"
endif ^ no whitespace (note that the echo recipe line, by contrast, must still be indented with a tab) | {
"source": [
"https://unix.stackexchange.com/questions/220447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118741/"
]
} |
220,503 | I have a set of packages (*.rpm). For each package I can do rpm -qRp <package> to list requires, but I would like to install them (those requires) without installing the packages themselves. The requires all live in enabled repositories. Is there some easy way to do this without writing my own script that would parse the output of rpm -qRp ... for example. I know I could do it by installing everything with requires ( yum localinstall ) and then uninstalling the original packages, but the problem is that my set contains packages with both dependencies and conflicts in between them. The required packages however don't conflict. I would have to do multiple yum localinstall <list> followed by yum remove <list> and make sure the packages in the list don't conflict. Is there a better way? I would basically like something like yum-builddep , but for requires, not buildrequires. My distros are Fedora / RHEL | You can use the yum deplist command to generate a list of package dependencies: $ yum deplist bind
dependency: /bin/bash
provider: bash.x86_64 4.3.39-5.fc21
dependency: /bin/sh
provider: bash.x86_64 4.3.39-5.fc21
dependency: bind-libs(x86-64) = 32:9.9.6-10.P1.fc21
provider: bind-libs.x86_64 32:9.9.6-10.P1.fc21
dependency: coreutils
provider: coreutils.x86_64 8.22-22.fc21
[...] Grab the provider: lines from this for a list of packages: $ yum deplist bind | awk '/provider:/ {print $2}' | sort -u
bash.x86_64
bind-libs.x86_64
coreutils.x86_64
glibc.i686
glibc.x86_64
grep.x86_64
krb5-libs.x86_64
libcap.x86_64
libcom_err.x86_64
libxml2.x86_64
openssl-libs.x86_64
shadow-utils.x86_64
systemd.x86_64
zlib.x86_64 Send this output to yum install to install the packages: $ yum deplist bind | awk '/provider:/ {print $2}' | sort -u |
xargs yum -y install | {
"source": [
"https://unix.stackexchange.com/questions/220503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27687/"
]
} |
220,588 | If I have the two dates below: 2015-09-12,2015-08-13 And I need to get the number of days between them, I will use the below code: awk -F'[-,]' '{print 360*($4-$1)+30*($5-$2)+($6-$3)}' The output of this code will be -29 , while the actual difference is 29 | You can define functions in awk like: awk -F'[-,]' '
function abs(v) {return v < 0 ? -v : v}
{print abs(360*($4-$1)+30*($5-$2)+($6-$3))}' Or: function abs(v) {v += 0; return v < 0 ? -v : v} For the returned value to be converted to its canonical form for both negative and positive numbers and strings to always be converted to numbers. Without it, abs($0) where the input record is 1e2 would yield 1e2 , while for -1e2 , it would yield -100 . | {
"source": [
"https://unix.stackexchange.com/questions/220588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123325/"
]
} |
220,750 | The switch port the server (CentOS 7) is connected to is configured as a trunk for VLANs 115 and 2014.
I have loaded the 8021q module: # lsmod | grep 8021q
# modprobe 8021q I would like to configure an IP address on the server using VLAN 115.
Performing the following configuration: ifcfg-em1 TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=em1
UUID=c0c4d851-d762-4301-8c20-d6128aee5261
DEVICE=em1
ONBOOT=yes ifcfg-em1.115 TYPE=Ethernet
BOOTPROTO=none
IPADDR=172.31.141.242
PREFIX=24
GATEWAY=172.31.141.1
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=em1.115
UUID=c0c4d851-d762-4301-8c20-d6128aee5261
DEVICE=em1.115
VLAN=yes
ONBOOT=yes I ended up not being able to restart the network service.
The error message is: Failed to start LSB: Bring up/down networking. What am I doing wrong? | It seems that disabling NetworkManager did the trick :) systemctl stop NetworkManager
systemctl disable NetworkManager | {
"source": [
"https://unix.stackexchange.com/questions/220750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119354/"
]
} |
220,796 | I have a personal folder /a/b on the server with permission 700. I don't want others to list the contents in /a/b. The owner of /a is root. Now I need to grant all users full access to the directory /a/b/c. I changed the permission of /a/b/c to 777 but it is still inaccessible to others. | You can. You just have to set the execute bit on the /a/b directory. That will prevent anyone from listing anything in b , but you can still do everything if you go directly to a/b/c . % mkdir -p a/b/c
% chmod 711 a/b
% sudo chown root a/b
% ll a/b
ls: cannot open directory a/b: Permission denied
% touch a/b/c/this.txt
% ls a/b/c
this.txt Beware that while others cannot list the contents of /a/b , they can access files in that directory if they guess the name of the file. % echo hello | sudo tee a/b/f
% cat a/b/f
hello
% cat a/b/doesntexist
cat: a/b/doesntexist: No such file or directory So be sure to maintain proper permissions (no group/world) on all other files/directories within the b directory, as this will avoid this caveat. | {
"source": [
"https://unix.stackexchange.com/questions/220796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67765/"
]
} |
220,852 | This question is very similar to this one: List of available services For my specific case, I'm wondering if there is a specific command to show the full list of services under Ubuntu. I did run ls /etc/init.d and it does show a pretty comprehensive list, but some entries are missing. I did see apache2 , mysql , gdm , and a whole lot of others, but some are missing. One example is plexmediaserver (I've installed Plex server recently and had some difficulty finding the name of its service). So to rephrase this question in as few words as possible: Is there a way to get the full list of possibilities of {x} for service {x} status Note: using Ubuntu 15.04 | Since Ubuntu has recently switched over to systemd, some services are still handled by upstart and listed with service --status-all and others by systemd: systemctl -l --type service --all or, as root, systemctl -r --type service --all However, software still using the legacy init system will likely be listed in /etc/init.d Looking through all of those will yield most services registered on the system. There is a good summary on systemd over on the Arch wiki | {
"source": [
"https://unix.stackexchange.com/questions/220852",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114416/"
]
} |
221,970 | This is merely just a vocabulary question, but which keeps turning around in my head. It comes from a practice exam from a LPIC preparation book. The correct answer according to the book is that ~/Documents is a relative directory because it is relative to the home directory. However, this book contains an honourable ratio of typos and mistakes so I cannot take for granted everything which is written there. Here I do not agree because for me ~ acts as a variable expanded by the shell into either the content of the $HOME variable or the current user home directory path (cf. man bash ), so the actual path is /home/myuser/Documents which is indeed an absolute directory. Even Wikipedia , for once, seems of no help to me on this topic (even if it seems to confirm that the book is wrong on this one): An absolute or full path points to the same location in a file system
regardless of the current working directory. To do that, it must
contain the root directory. By contrast, a relative path starts from some given working directory,
avoiding the need to provide the full absolute path. Here again, I do not agree: according to this definition, the path /opt/kde3/bin/../lib which does not depends of the current working directory should be an absolute one, however my current understanding of this matches the book's author making this path a relative one. A quick web-search is just adding to my frustration, according to Webster Dictionary : absolute path - A path relative to the root directory. Its first character must be the pathname separator. So $HOME/Documents , or even just $HOME would not be considered absolute directories? Or does this definition implies variable expansion? What about the shell's ~ character? Is there any reliable definition of relative vs. absolute directory I can find somewhere and am I wrong all of the way? | This is essentially a question about the definition of terms. So for your purposes, the answer is whatever LPIC wants. But we can come to some conclusions based on technical facts: If you passed '~/Documents' to a system call , it would look for a directory named exactly ~ in the current directory (and probably fail). So, by the notion of pathnames used by the kernel , this is a relative path — but that's not what we meant. ~ is syntax implemented by the shell (and other programs which imitate it for convenience) which expands it into a real pathname. To illustrate, ~/Documents is approximately the same thing as $HOME/Documents (again, shell syntax). Since $HOME should be an absolute path, the value of $HOME/Documents is also an absolute path. But the text $HOME/Documents or ~/Documents has to be expanded by the shell in order to become the path we mean. Thus if I wanted to be precise and consistent, I would say that ~/Documents is a fragment of shell-script which expands to an absolute path. | {
"source": [
"https://unix.stackexchange.com/questions/221970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53965/"
]
} |
222,054 | NOTE: If client devices ( computer B in this example) want to obtain
internet through the gateway computer, maybe they still need to configure
nameserver resolution. This is not explained here (a gateway does not necessarily serve internet). I am trying to understand the fundamentals of networks routing. So I am experimenting with my LAN (I don't need internet for now, just LAN communications). I know the network configuration matters are a rather complex thing, but I am just trying to make a computer (say A) to act as a gateway for another (say B) (both running Ubuntu Linux). I only need B to be capable to reach the router, that is only reachable for A. This is the case: Router for computer A --> 192.168.0.1
Computer A - eth0 --> 192.168.0.2
Computer A - eth1 --> 192.168.1.1
Computer B - eth0 --> 192.168.1.2 Computer A connects fine to router . Computer A and B connect fine (ping, SSH... etc) between them . Computer B can not reach the router for computer A. I was thinking that just adding on B Computer A as default gateway and activating IP Forwarding on A would make B to be able to reach the router for A: luis@ComputerB:~$ sudo route add default gw 192.168.1.1
luis@ComputerB:~$ sudo routel
target gateway source proto scope dev tbl
127.0.0.0 broadcast 127.0.0.1 kernel link lo local
127.0.0.0 8 local 127.0.0.1 kernel host lo local
127.0.0.1 local 127.0.0.1 kernel host lo local
127.255.255.255 broadcast 127.0.0.1 kernel link lo local
192.168.1.0 broadcast 192.168.1.2 kernel link eth0 local
192.168.1.2 local 192.168.1.2 kernel host eth0 local
192.168.1.255 broadcast 192.168.1.2 kernel link eth0 local
default 192.168.1.1 eth0
169.254.0.0 16 link eth0
192.168.1.0 24 192.168.1.2 kernel link eth0 And on Computer A (the intermediate gateway): root@ComputerA:~$ echo 1 > /proc/sys/net/ipv4/ip_forward Computer B can still ping computer A, but router for A does not answer: luis@ComputerB:~$ ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
^C (No ping response) Is this the correct procedure to make a computer running Linux act as a gateway for another computer in a simple manner? | You are almost there; you just need to make sure traffic gets back to B. Right now you have forwarded traffic from B to the outside world, but A doesn't know how to get traffic back to B. You need A to keep some state about the connections going through it. To do this you will want to enable NAT . You already have step one, which is to allow forwarding. Then you need to add a few firewall rules using iptables : iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE This says: on the network address translation table, after we have figured out the routing of a packet on output eth0 (the external interface), replace the return address information with our own so the return packets come to us. Also, remember that we did this (like a lookup table that remembers this connection). iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT Allow packets that want to come from eth1 (the internal interface) to go out eth0 (the external interface). iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT Use that lookup table we had from before to see if the packet arriving on the external interface actually belongs to a connection that was already initiated from the internal one. Note that these rules are not persistent across reboots; save them with iptables-save or your distribution's firewall service. | {
"source": [
"https://unix.stackexchange.com/questions/222054",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57439/"
]
} |
222,121 | Input: ARCHIVE B1_NAME B2_NAME B3_NAME ELEMENT INFO_NAM WERT PROCID
-------- -------- -------- -------- -------- -------- ---- ------
15MinAvg AIRSS 33-GIS DMDMGIS1 I MvAvr15m 1123 CP
15MinAvg AIRSS 33-GIS DMDMGIS1 P MvAvr15m 2344 CP
15MinAvg AIRSS 33-GIS DMDMGIS1 Q MvAvr15m 4545 CP
15MinAvg AIRSS 33-GIS DMDMGIS2 I MvAvr15m 6576 CP
15MinAvg AIRSS 33-GIS DMDMGIS2 P MvAvr15m 4355 CP
15MinAvg AIRSS 33-GIS DMDMGIS2 Q MvAvr15m 6664 CP Output: ARCHIVE B1_NAME B2_NAME B3_NAME ELEMENT WERT
-------- -------- -------- -------- ------- ----
15MinAvg AIRSS 33-GIS DMDMGIS1 I 1123
15MinAvg AIRSS 33-GIS DMDMGIS1 P 2344
15MinAvg AIRSS 33-GIS DMDMGIS1 Q 4545
15MinAvg AIRSS 33-GIS DMDMGIS2 I 6576
15MinAvg AIRSS 33-GIS DMDMGIS2 P 4355
15MinAvg AIRSS 33-GIS DMDMGIS2 Q 6664 I want to delete the two columns INFO_NAM and PROCID from my input file. | This has been answered before elsewhere on Stack Overflow: delete a column with awk or sed Deleting columns from a file with awk or from command line on linux etc.. I believe awk is the best for that. awk '{print $1,$2,$3,$4,$5,$7}' file It is possible to use cut as well. cut -f1,2,3,4,5,7 file | {
"source": [
"https://unix.stackexchange.com/questions/222121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
222,218 | My file, PSS-A (Primary A)
PSS-B (Primary B)
PSS-C (Primary C)
PSS-D (Primary D)
PSS-E (Primary E)
PSS-F (Primary F)
PSS-G (Primary G)
PSS-H (Primary H)
PSS-I (Primary I)
SPARE (SPARE) Output file, 1> PSS-A (Primary A)
2> PSS-B (Primary B)
3> PSS-C (Primary C)
4> PSS-D (Primary D)
5> PSS-E (Primary E)
6> PSS-F (Primary F)
7> PSS-G (Primary G)
8> PSS-H (Primary H)
9> PSS-I (Primary I)
10> SPARE (SPARE) | If you want the exact format that you have specified: awk '{print NR "> " $0}' inputfile > outputfile Otherwise, though not standard, most implementations of the cat command can print line numbers for you (numbers padded to width 6 and followed by TAB in at least the GNU, busybox, Solaris and FreeBSD implementations): cat -n inputfile > outputfile Or you can use grep -n (numbers followed by : ) with a regexp like ^ that matches any line: grep -n '^' inputfile > outputfile There is also the dedicated nl utility; nl -ba inputfile > outputfile numbers every line, including blank ones. | {
"source": [
"https://unix.stackexchange.com/questions/222218",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
222,221 | Ubuntu 14.04 on a desktop Source Drive: /dev/sda1: 5TB ext4 single drive volume Target Volume: /dev/mapper/archive-lvarchive: raid6 (mdadm) 18TB volume with lvm partition and ext4 There are roughly 15 million files to move, and some may be duplicates (I do not want to overwrite duplicates). Command used (from source directory) was: ls -U |xargs -i -t mv -n {} /mnt/archive/targetDir/{} This has been going on for a few days as expected, but I am getting the error with increasing frequency. When it started the target drive was about 70% full, now it's about 90%. It used to be that about 1/200 of the moves would state an error, now it's about 1/5. None of the files are over 100Mb, most are around 100k. Some info: $ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb3 155G 5.5G 142G 4% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 3.9G 4.0K 3.9G 1% /dev
tmpfs 797M 2.9M 794M 1% /run
none 5.0M 4.0K 5.0M 1% /run/lock
none 3.9G 0 3.9G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/sdb1 19G 78M 18G 1% /boot
/dev/mapper/archive-lvarchive 18T 15T 1.8T 90% /mnt/archive
/dev/sda1 4.6T 1.1T 3.3T 25% /mnt/tmp
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdb3 10297344 222248 10075096 3% /
none 1019711 4 1019707 1% /sys/fs/cgroup
udev 1016768 500 1016268 1% /dev
tmpfs 1019711 1022 1018689 1% /run
none 1019711 5 1019706 1% /run/lock
none 1019711 1 1019710 1% /run/shm
none 1019711 2 1019709 1% /run/user
/dev/sdb1 4940000 582 4939418 1% /boot
/dev/mapper/archive-lvarchive 289966080 44899541 245066539 16% /mnt/archive
/dev/sda1 152621056 5391544 147229512 4% /mnt/tmp Here's my output: mv -n 747265521.pdf /mnt/archive/targetDir/747265521.pdf
mv -n 61078318.pdf /mnt/archive/targetDir/61078318.pdf
mv -n 709099107.pdf /mnt/archive/targetDir/709099107.pdf
mv -n 75286077.pdf /mnt/archive/targetDir/75286077.pdf
mv: cannot create regular file ‘/mnt/archive/targetDir/75286077.pdf’: No space left on device
mv -n 796522548.pdf /mnt/archive/targetDir/796522548.pdf
mv: cannot create regular file ‘/mnt/archive/targetDir/796522548.pdf’: No space left on device
mv -n 685163563.pdf /mnt/archive/targetDir/685163563.pdf
mv -n 701433025.pdf /mnt/archive/targetDir/701433025.pd I've found LOTS of postings on this error, but the prognosis doesn't fit. Such issues as "your drive is actually full" or "you've run out of inodes" or even "your /boot volume is full". Mostly, though, they deal with 3rd party software causing an issue because of how it handles the files, and they are all constant, meaning EVERY move fails. Thanks. EDIT:
here is a sample failed and succeeded file: FAILED (still on source drive) ls -lhs 702637545.pdf
16K -rw-rw-r-- 1 myUser myUser 16K Jul 24 20:52 702637545.pdf SUCCEEDED (On target volume) ls -lhs /mnt/archive/targetDir/704886680.pdf
104K -rw-rw-r-- 1 myUser myUser 103K Jul 25 01:22 /mnt/archive/targetDir/704886680.pdf Also, while not all files fail, a file which fails will ALWAYS fail. If I retry it over and over it is consistent. EDIT: Some additional commands per request by @mjturner $ ls -ld /mnt/archive/targetDir
drwxrwxr-x 2 myUser myUser 1064583168 Aug 10 05:07 /mnt/archive/targetDir
$ tune2fs -l /dev/mapper/archive-lvarchive
tune2fs 1.42.10 (18-May-2014)
Filesystem volume name: <none>
Last mounted on: /mnt/archive
Filesystem UUID: af7e7b38-f12a-498b-b127-0ccd29459376
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr dir_index filetype needs_recovery extent 64bit flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 289966080
Block count: 4639456256
Reserved block count: 231972812
Free blocks: 1274786115
Free inodes: 256343444
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 2048
Inode blocks per group: 128
RAID stride: 128
RAID stripe width: 512
Flex block group size: 16
Filesystem created: Thu Jun 25 12:05:12 2015
Last mount time: Mon Aug 3 18:49:29 2015
Last write time: Mon Aug 3 18:49:29 2015
Mount count: 8
Maximum mount count: -1
Last checked: Thu Jun 25 12:05:12 2015
Check interval: 0 (<none>)
Lifetime writes: 24 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 3ea3edc4-7638-45cd-8db8-36ab3669e868
Journal backup: inode blocks
$ tune2fs -l /dev/sda1
tune2fs 1.42.10 (18-May-2014)
Filesystem volume name: <none>
Last mounted on: /mnt/tmp
Filesystem UUID: 10df1bea-64fc-468e-8ea0-10f3a4cb9a79
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 152621056
Block count: 1220942336
Reserved block count: 61047116
Free blocks: 367343926
Free inodes: 135953194
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 732
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 4096
Inode blocks per group: 256
Flex block group size: 16
Filesystem created: Thu Jul 23 13:54:13 2015
Last mount time: Tue Aug 4 04:35:06 2015
Last write time: Tue Aug 4 04:35:06 2015
Mount count: 3
Maximum mount count: -1
Last checked: Thu Jul 23 13:54:13 2015
Check interval: 0 (<none>)
Lifetime writes: 150 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: a266fec5-bc86-402b-9fa0-61e2ad9b5b50
Journal backup: inode blocks | Bug in the implementation of the ext4 feature dir_index , which you are using on your destination filesystem. Solution: recreate the filesystem without dir_index, or disable the feature using tune2fs (some caution required; see the related link Novell SuSE 10/11: Disable H-Tree Indexing on an ext3 Filesystem , which, although it relates to ext3, may need similar caution). (get a really good backup made of the filesystem)
(unmount the filesystem)
tune2fs -O ^dir_index /dev/foo
e2fsck -fDvy /dev/foo
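(optional: before remounting, confirm the feature bit really is gone; /dev/foo is the same placeholder device as above): tune2fs -l /dev/foo | grep dir_index
If the grep prints nothing, dir_index is off.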
(mount the filesystem) From ext4: Mysterious “No space left on device”-errors : ext4 has a feature called dir_index enabled by default, which is quite
susceptible to hash-collisions. [...] ext4 has the possibility to hash the filenames of its contents. This enhances performance, but has a “small” problem: ext4 does not grow its hashtable when it starts to fill up. Instead it returns -ENOSPC or “no space left on device”. | {
"source": [
"https://unix.stackexchange.com/questions/222221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45877/"
]
} |
222,229 | I am not an experienced Linux user. I am trying to select column data from 2 separate files and write to a third file using awk. I have tried to paste the files together i.e. paste file1 file2 and then awk but the data is appended on the next line (alternating). The data looks like this: file1 HZ880 0.00
HAM86 1.13
HAM40 1.60 file2 HZ880 -31.816826 115.757963 35.8909 0.0170 -.0170
HAM86 -31.824923 115.761507 33.6108 0.0165 -.0165
HAM40 -31.828528 115.762380 38.8434 0.0163 -.0163 How do I create a new file with column2 (file1) and column4 (file2)? I have tried the following: paste ${LEV_IN1} ${LEV_IN2} | awk '{print $2,$4}' > ${TEMP2} where LEV_IN1 is file1 and LEV_IN2 is file2 What am I doing wrong? | The paste is fine; the problem is the field numbers. paste glues each line of file2 onto the end of the corresponding line of file1, so awk sees one combined record per line. file1 contributes fields 1 and 2, which means column 4 of file2 becomes field 6 of the combined line, not field 4: paste "${LEV_IN1}" "${LEV_IN2}" | awk '{print $2, $6}' > "${TEMP2}" With your sample data this prints: 0.00 35.8909
1.13 33.6108
1.60 38.8434 If the combined lines really do come out stacked instead of side by side, check that you are running paste (not cat ), and that neither file has stray carriage returns ( cat -A file1 would show them as ^M at the line ends). | {
"source": [
"https://unix.stackexchange.com/questions/222229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/127951/"
]
} |
222,359 | I wrote a function that checks for a corrupted archive using a CRC checksum. To test it, I just opened the archive and scrambled the content with a hex editor. The problem is that I do not believe that this is the correct way to generate a corrupted file. Is there any other way to create a "controlled corruption", so it won't be totally random but can simulate what happens with real corrupted archives? I never had to corrupt something on purpose so I am not really sure how to do so, beside the random scrambling of data in a file. | I haven't done much fuzz testing either, but here's two ideas: Write some zeroes into the middle of the file. Use dd with conv=notrunc . This writes a single byte (block-size=1 count=1): dd if=/dev/zero of=file_to_fuzz.zip bs=1 count=1 seek=N conv=notrunc Using /dev/urandom as a source is also an option. Alternatively, punch multiple-of-4k holes with fallocate --punch-hole . You could even fallocate --collapse-range to cut out a page without leaving a zero-filled hole. (This will change the file size). A download resumed at the wrong place would match the --collapse-range scenario. An incomplete torrent will match the punch-hole scenario. (Sparse file or pre-allocated extents, either read as zero anywhere that hasn't been written yet.) Bad RAM (in the system you downloaded the file from) can cause corruption, and optical drives can also corrupt files (their ECC isn't always strong enough to recover perfectly from scratches or fading of the dye). DVD sectors (ECC blocks) are 2048B , but single byte or even single-bit errors can happen. Some drives will probably give you the bad uncorrectable data instead of a read-error for the sector, especially if you read in raw mode, or w/e it's called. | {
"source": [
"https://unix.stackexchange.com/questions/222359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119523/"
]
} |
222,372 | I know that rm -f file1 will forcefully remove file1 without prompting me. I also know that rm -i file1 will first prompt me before removing file1 Now if you execute rm -if file1 , this will also forcefully remove file1 without prompting me. However, if you execute rm -fi file1 , it will prompt me before removing file1 . So is it true that when combining command options, the last one will take precedence ? like rm -if , then -f will take precedence, but rm -fi then the -i will take precedence. The ls command for example, it doesn't matter if you said ls -latR or ls -Rtal . So I guess it only matters when you have contradictory command options like rm -if , is that correct? | When using rm with both -i and -f options, the first one will be ignored. This is documented in the POSIX standard: -f
Do not prompt for confirmation. Do not write diagnostic messages or modify
the exit status in the case of nonexistent operands. Any previous
occurrences of the -i option shall be ignored.
-i
Prompt for confirmation as described previously. Any previous occurrences
of the -f option shall be ignored. and also in GNU info page: ‘-f’
‘--force’
Ignore nonexistent files and missing operands, and never prompt the user.
Ignore any previous --interactive (-i) option.
‘-i’
Prompt whether to remove each file. If the response is not affirmative, the
file is skipped. Ignore any previous --force (-f) option. Let's see what happens under the hood: rm processes its options with getopt(3) , specifically getopt_long . This function processes the option arguments on the command line ( **argv ) in order of appearance: If getopt() is called repeatedly, it returns successively each of the option characters from each of the option elements. This function is typically called in a loop until all options are processed. From this function's perspective, the options are processed in order. What actually happens, however, is application dependent, as the application logic can choose to detect conflicting options, override them, or present an error. For the case of rm and the i and f options, they perfectly overwrite each other. From rm.c :
235 x.interactive = RMI_NEVER;
236 x.ignore_missing_files = true;
237 prompt_once = false;
238 break;
239
240 case 'i':
241 x.interactive = RMI_ALWAYS;
242 x.ignore_missing_files = false;
243 prompt_once = false;
244 break; Both options set the same variables, and the state of these variables will be whichever option is last in the command line. The effect of this is in line with the POSIX standard and the rm documentation.
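A quick way to see the last-one-wins behaviour in a shell: $ touch x && rm -fi x      # -i is last, so rm asks before removing
$ touch y && rm -if y      # -f is last, so y is removed with no prompt
(Here x and y are just throwaway test files.) | {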
"source": [
"https://unix.stackexchange.com/questions/222372",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100193/"
]
} |
222,394 | I've got a few, quite silly, non-technical questions about giving codenames to Debian releases. Each Debian release has its unique codename, which is (so far) a characters' name from Toy Story movies by Pixar . Here is list of all assigned codenames so far: release 1.1 is buzz (Buzz Lightyear) - the spaceman, release 1.2 is rex - the tyrannosaurus, release 1.3.x is bo (Bo Peep) - the girl who took care of the sheep, release 2.0 is hamm - the piggy bank, release 2.1 is slink (Slinky Dog) - the toy dog, release 2.2 is potato - Mr. Potato, release 3.0 is woody - the cowboy, release 3.1 is sarge - the sergeant of the Green Plastic Army Men, release 4.0 is etch - the toy blackboard (Etch-a-Sketch), release 5.0 is lenny - the toy binoculars, release 6.0 is squeeze - the name for the three-eyed aliens, release 7.0 is wheezy - the name of the rubber toy penguin with a red bow tie, release 8.0 is jessie - the name of the yodelling cowgirl, release 9.0 is stretch - a purple rubbery octopus toy at Sunnyside Daycare , release 10.0 is buster - Andy 's pet dachshund (currently stable ), release 11.0 is bullseye - Woody 's horse. List of upcoming major Debian releases' codenames after bullseye : release 12.0 is bookworm - an intelligent worm toy with a built-in flashlight (currently testing ), release 13.0 is trixie - a blue plastic Triceratops. There are also: special codename sid ( S till I n D evelopment ) which is symbolic link to codename which is currently unstable , stable which is symbolic link to codename which is currently stable, testing which is symbolic link to codename which is currently testing. The list of Toy Story characters is quite robust but at some time, there will be no more characters' names to assign. My questions are: What codenames will be assigned if we run out of characters' names? Who decides what is codename of next release (please don't answer ambiguously like: 'community' )? How many releases' names are planned ahead? BTW: Interesting quote from debian.org/doc/manuals : The decision of using Toy Story names was made by Bruce Perens who
was, at the time, the Debian Project Leader and was working also at Pixar , the company that produced the movies. Infographics by Claudio Ferreira Filho (@filhocf) ( license : CC BY-SA 4.0 ). | I'll answer your questions out of order: the release team chooses code names (see their task description ), two releases ahead; the next three releases are Bullseye (Debian 11), Bookworm (Debian 12), and Trixie (Debian 13); and I don't think we're worried about running out of names yet... As pointed out by eyoung100 , Buster is Andy's dog. As you mention in your updated question, Bullseye is Woody's horse. Bookworm is the intelligent, flashlight-wielding worm toy from Toy Story 3 . Trixie is Bonnie’s triceratops from Toy Story 3 . Also, Sid is the name of the next-door kid who breaks all his toys . "Still in development" is a backronym. | {
"source": [
"https://unix.stackexchange.com/questions/222394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28115/"
]
} |
222,440 | When using the command line, often it gets very cluttered. Making it inconvenient to examine past commands and their outputs for example. I would like to have a newline added each time before the command prompt is shown. Like so: <clutter>
<blank line>
name@machine:~$ I use the bash shell. How can this be achieved? | One way to achieve this is by modifying the .bashrc file.
Simply place the following at the end of the .bashrc file. PS1="\n$PS1" To explain how this works, PS1 is the variable containing what should be displayed as the prompt. All this is saying is "set PS1 to the previous contents of PS1 , with a newline character prepended". Putting it in .bashrc on most distros just makes bash run it every time you open an interactive shell (but not a login shell - see Difference between Login Shell and Non-Login Shell? ). | {
"source": [
"https://unix.stackexchange.com/questions/222440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85507/"
]
} |
222,473 | I have been trying for several days now (had to reinstall arch twice during), with setting up GPU passthrough on my pc without success. The hardware is Asus Z97-P Intel I5-4690 AMD Radeon R9 380 (catalyst sees it as R9 285) which should be capable of IOMMU. My computer runs Arch Linux. I have been following the following two articles on the topic: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF http://vfio.blogspot.hu/2015/05/vfio-gpu-how-to-series-part-3-host.html The Goal Unfortunately I only have one video card (and intel on-board) but I would be totally happy with starting the VM from the command line when I want to use Windows, otherwise I would like to just type startx to utilize the graphics card to the fglrx module. How I tried to achieve it I passed the intel_iommu=on option to initrd, which resulted in the following list using # find /sys/kernel/iommu_groups -type l
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/2/devices/0000:00:14.0
/sys/kernel/iommu_groups/3/devices/0000:00:16.0
/sys/kernel/iommu_groups/4/devices/0000:00:1a.0
/sys/kernel/iommu_groups/5/devices/0000:00:1b.0
/sys/kernel/iommu_groups/6/devices/0000:00:1c.0
/sys/kernel/iommu_groups/6/devices/0000:00:1c.2
/sys/kernel/iommu_groups/6/devices/0000:00:1c.3
/sys/kernel/iommu_groups/6/devices/0000:03:00.0
/sys/kernel/iommu_groups/6/devices/0000:04:00.0
/sys/kernel/iommu_groups/7/devices/0000:00:1d.0
/sys/kernel/iommu_groups/8/devices/0000:00:1f.0
/sys/kernel/iommu_groups/8/devices/0000:00:1f.2
/sys/kernel/iommu_groups/8/devices/0000:00:1f.3 which might mean that IOMMU is enabled successfully, but according to arch wiki it might not have been setup correctly (see last line of code): #dmesg|grep -e DMAR -e IOMMU
[ 0.000000] ACPI: DMAR 0x00000000DDB41D40 000080 (v01 INTEL BDW 00000001 INTL 00000001)
[ 0.000000] Intel-IOMMU: enabled
[ 0.024745] dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap d2008c20660462 ecap f010da
[ 0.024747] IOAPIC id 8 under DRHD base 0xfed90000 IOMMU 0
[ 0.296873] DMAR: No ATSR found
[ 0.296964] IOMMU: dmar0 using Queued invalidation
[ 0.296965] IOMMU: Setting RMRR:
[ 0.296973] IOMMU: Setting identity map for device 0000:00:14.0 [0xdee7d000 - 0xdee8bfff]
[ 0.296996] IOMMU: Setting identity map for device 0000:00:1a.0 [0xdee7d000 - 0xdee8bfff]
[ 0.297012] IOMMU: Setting identity map for device 0000:00:1d.0 [0xdee7d000 - 0xdee8bfff]
[ 0.297024] IOMMU: Prepare 0-16MiB unity mapping for LPC
[ 0.297029] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[ 3.326568] AMD IOMMUv2 driver by Joerg Roedel <[email protected]>
[ 3.326569] AMD IOMMUv2 functionality not available on this system I have tried the other options mentioned on the arch wiki site ( pass pci-stub to MODULES in mkinitcpio.conf) , but this last line persisted. First question: Is the first command's output sufficient to say that my system correctly utilizes IOMMU? With a bit of distrust, I have arrived at the part where I had to bind my VGA to pci-stub, which have led to some reinstall already, so I would like to have at least some directions to go from here, whether to use pci-stub or VFIO and such. | To your first question: yes. The /sys/kernel/iommu_groups tree is only populated once the IOMMU is actually initialized and in use, so your find output already shows that intel_iommu=on took effect. The AMD IOMMUv2 lines are just the amd_iommu_v2 module probing for AMD hardware; on an Intel board, "functionality not available" is expected and harmless, so that last dmesg line is not a failure. As for which way to go: pci-stub merely keeps the host driver away from the card, while vfio-pci is what actually hands it to the VM, and on recent kernels vfio-pci can claim the card at boot by itself, so prefer the vfio-pci route from the guides you linked and keep pci-stub only as a fallback for kernels where vfio-pci cannot grab the device early enough.
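A minimal sketch of the vfio-pci binding, assuming your kernel ships the vfio-pci module; the IDs below are placeholders, so take the real vendor:device pairs for the GPU and its HDMI audio function from lspci -nn : # /etc/modprobe.d/vfio.conf
options vfio-pci ids=vvvv:dddd,vvvv:eeee Then add the vfio modules to MODULES in mkinitcpio.conf and rebuild the initramfs, as the Arch wiki article you followed describes, so the card is claimed at boot. | {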
"source": [
"https://unix.stackexchange.com/questions/222473",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107849/"
]
} |
222,602 | I have a log file that was created by nobody : nogroup , which is activity being logged to, I wanted to emulate adding a message to that log file.
My first thought was to: $ sudo su nobody
This account is currently not available. | You have a way simpler solution, just run: su -s /bin/bash nobody (replace /bin/bash with the shell of your choice). The This account is currently not available. error is due to the fact that the nobody user's default shell is /usr/sbin/nologin ; su -s forces the system to use another shell.
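If all you need is to append a test line to the log rather than get an interactive shell, you can also run a single command as nobody (the path is just a placeholder for your log file): sudo -u nobody sh -c 'echo "test entry" >> /var/log/yourapp.log'
This avoids changing shells entirely. | {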
"source": [
"https://unix.stackexchange.com/questions/222602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61349/"
]
} |
222,709 | I have a file str.txt with the following sample records. 31,2713810299,1,11-Aug-15 19:52:10
32,2713810833,1,11-Aug-15 21:36:18 Now I want to print output with awk as below. cat str.txt|awk -F, '{print substr("$4",1,9)}' - The output should be: '11-Aug-15'
'11-Aug-15' | a single quote would be \x27 awk -F, '{print "\x27"substr($4,1,9)"\x27" }' (Note that your original substr("$4",1,9) also quotes $4 , which makes awk take the two-character string $4 literally instead of the fourth field; drop the inner quotes as above.) | {
"source": [
"https://unix.stackexchange.com/questions/222709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125503/"
]
} |
222,944 | I try to sshfs mount a remote dir, but the mounted files are not writable. I have run out of ideas or ways to debug this. Is there anything I should check on the remote server? I am on an Xubuntu 14.04. I mount remote dir of a 14.04 Ubuntu. local $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty I changed the /etc/fuse.conf local $ sudo cat /etc/fuse.conf
# /etc/fuse.conf - Configuration file for Filesystem in Userspace (FUSE)
# Set the maximum number of FUSE mounts allowed to non-root users.
# The default is 1000.
#mount_max = 1000
# Allow non-root users to specify the allow_other or allow_root mount options.
user_allow_other And my user is in the fuse group local $ sudo grep fuse /etc/group
fuse:x:105:MY_LOACL_USERNAME And I mount the remote dir with (tried with/without combinations of sudo, default_permissions, allow_other): local $sudo sshfs -o allow_other,default_permissions -o IdentityFile=/path/to/ssh_key REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ /mnt/LOCAL_DIR_NAME/ The REMOTE_USERNAME has write permissions to the dir/files (on the remote server). I tried the above command without sudo, default_permissions, and in all cases I get: local $ ls -al /mnt/LOCAL_DIR_NAME/a_file
-rw-rw-r-- 1 699 699 1513 Aug 12 16:08 /mnt/LOCAL_DIR_NAME/a_file
local $ test -w /mnt/LOCAL_DIR_NAME/a_file && echo "Writable" || echo "Not Writable"
Not Writable Clarification 0 In response to user3188445's comment: $ whoami
LOCAL_USER
$ cd
$ mkdir test_mnt
$ sshfs -o allow_other,default_permissions -o IdentityFile=/path/to/ssh_key REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ test_mnt/
$ ls test_mnt/
I see the contents of the dir correctly
$ ls -al test_mnt/
total 216
drwxr-xr-x 1 699 699 4096 Aug 12 16:42 .
drwxr----- 58 LOCAL_USER LOCAL_USER 4096 Aug 17 15:46 ..
-rw-r--r-- 1 699 699 2557 Jul 30 16:48 sample_file
drwxr-xr-x 1 699 699 4096 Aug 11 17:25 sample_dir
$ touch test_mnt/new_file
touch: cannot touch ‘test_mnt/new_file’: Permission denied
# extra info: SSH to the remote host and check file permissions
$ ssh REMOTE_USERNAME@REMOTE_HOST
# on remote host
$ ls -al /remote/dir/path/
lrwxrwxrwx 1 root root 18 Jul 30 13:48 /remote/dir/path/ -> /srv/path/path/path/
$ cd /remote/dir/path/
$ ls -al
total 216
drwxr-xr-x 26 REMOTE_USERNAME REMOTE_USERNAME 4096 Aug 12 13:42 .
drwxr-xr-x 4 root root 4096 Jul 30 14:37 ..
-rw-r--r-- 1 REMOTE_USERNAME REMOTE_USERNAME 2557 Jul 30 13:48 sample_file
drwxr-xr-x 2 REMOTE_USERNAME REMOTE_USERNAME 4096 Aug 11 14:25 sample_dir | The question was answered in a linux mailing list ; I post a translated answer here for completeness. Solution The solution is to not use both of the options default_permissions and allow_other when mounting (which I didn't try in my original experiments). Explanation The problem seems to be quite simple. When you use the option default_permissions in fusermount then fuse's permission control of the fuse mount is handled by the kernel and not by fuse . This means that the REMOTE_USER's uid/gid aren't mapped to the LOCAL_USER (sshfs.c IDMAP_NONE). It works the same way as a simple nfs fs without mapping. So, it makes sense to prohibit the access, if the uid/gid numbers don't match. If you have the option allow_other then this dir is writable only by the local user with uid 699, if it exists. From fuse's man: 'default_permissions'
By default FUSE doesn't check file access permissions, the
filesystem is free to implement its access policy or leave it to
the underlying file access mechanism (e.g. in case of network
filesystems). This option enables permission checking, restricting
access based on file mode. It is usually useful together with the
'allow_other' mount option.
'allow_other'
This option overrides the security measure restricting file access
to the user mounting the filesystem. This option is by default only
allowed to root, but this restriction can be removed with a
(userspace) configuration option. | {
"source": [
"https://unix.stackexchange.com/questions/222944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128564/"
]
} |
222,974 | When I want to ask for a password in a bash script, I do that : read -s ...but when I run bash in POSIX mode, with sh , the -s option is rejected: $ read -s
sh: 1: read: Illegal option -s How do I securely ask for an input with a POSIX-compliant command ? | read_password() {
REPLY="$(
# always read from the tty even when redirected:
exec < /dev/tty || exit # || exit only needed for bash
# save current tty settings:
tty_settings=$(stty -g) || exit
# schedule restore of the settings on exit of that subshell
# or on receiving SIGINT or SIGTERM:
trap 'stty "$tty_settings"' EXIT INT TERM
# disable terminal local echo
stty -echo || exit
# prompt on tty
printf "Password: " > /dev/tty
# read password as one line, record exit status
IFS= read -r password; ret=$?
# display a newline to visually acknowledge the entered password
echo > /dev/tty
# return the password for $REPLY
printf '%s\n' "$password"
exit "$ret"
)"
} Note that for those shells (ksh88, mksh and most other pdksh-derived shells) where printf is not builtin, the password would appear in clear in the ps output (for a few microseconds) or may show up in some audit logs if all command invocations with their parameters are audited. In those shells however, you can replace it with print -r -- "$password" . In any case echo is generally not an option . Another POSIX-compliant one that doesn't involve revealing the password in the ps output (but might end up having it written onto permanent storage) is: cat << EOF
$password
EOF Also note that zsh's IFS= read -rs 'pass?Password: ' or bash's IFS= read -rsp 'Password: ' pass issue the Password: prompt on stderr. So with those, you might want to add a 2> /dev/tty to make sure the prompt goes to the controlling terminal. In any case, make sure you don't forget the IFS= and -r . | {
"source": [
"https://unix.stackexchange.com/questions/222974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128457/"
]
} |
223,182 | I tried following shell script which should replace spaces from all xml filenames for xml_file in $(find $1 -name "* .xml" -type f);
do
echo "removing spaces from XML file:" $xml_file
mv "$xml_file" "${xml_file// /_}";
done Suppose, I have xml file with the name xy z.xml , then it gives: removing spaces from XML file: /home/krishna/test/xy
mv: cannot stat `/home/krishna/test/xy': No such file or directory
removing spaces from XML file: .xml
mv: cannot stat `z.xml': No such file or directory | Use this with bash : find $1 -name "* *.xml" -type f -print0 | \
while read -d $'\0' f; do mv -v "$f" "${f// /_}"; done find will search for files with a space in the name. The filenames will be printed with a nullbyte ( -print0 ) as delimiter to also cope with special filenames.
Then the read builtin reads the filenames delimited by the nullbyte and finally mv replaces the spaces with an underscore. EDIT: If you want to remove the spaces in the directories too, it's a bit more complicated. The directories are renamed and then not anymore accessible by the name find finds. Try this: find -name "* *" -print0 | sort -rz | \
while read -d $'\0' f; do mv -v "$f" "$(dirname "$f")/$(basename "${f// /_}")"; done The sort -rz reverses the file order, so that the deepest files in a folder are the first to move and the folder itself will be the last one. So, there are never folders renamed before all files and folder are rename inside of it. The mv command in the loop is a bit changed too. In the target name, we only remove the spaces in the basename of the file, else it wouldn't be accessible. | {
"source": [
"https://unix.stackexchange.com/questions/223182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99822/"
]
} |
223,276 | So ssh has the option HostKeyAlgorithms . Sample usage: ssh -o "HostKeyAlgorithms ssh-rsa" user@hostname I'm trying to get the client to connect using the servers ecdsa key, but I can't find what the correct string is for that. What command can I use to get a list of the available HostKeyAlgorithms ? | ssh -Q key Unless you have an ancient version of OpenSSH, in which case uhhhh source dive, or run ssh -v -v -v ... and see if what you want appears there. | {
"source": [
"https://unix.stackexchange.com/questions/223276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125142/"
]
} |
223,385 | On 32-bit Linux systems, invoking this $ /lib/libc.so.6 and on 64-bit systems this $ /lib/x86_64-linux-gnu/libc.so.6 in a shell, provides an output like this: GNU C Library stable release version 2.10.1, by Roland McGrath et al.
Copyright (C) 2009 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
Compiled by GNU CC version 4.4.0 20090506 (Red Hat 4.4.0-4).
Compiled on a Linux >>2.6.18-128.4.1.el5<< system on 2009-08-19.
Available extensions:
The C stubs add-on version 2.1.2.
crypt add-on version 2.1 by Michael Glad and others
GNU Libidn by Simon Josefsson
Native POSIX Threads Library by Ulrich Drepper et al
BIND-8.2.3-T5B
RT using linux kernel aio
For bug reporting instructions, please see:
<http://www.gnu.org/software/libc/bugs.html>. Why and how does this happen, and how is it possible to do the same in other shared libraries? I looked at /usr/lib to find executables, and I found /usr/lib/libvlc.so.5.5.0 . Running it led to a segmentation fault . :-/ | That library has a main() function or equivalent entry point, and was compiled in such a way that it is useful both as an executable and as a shared object. Here's one suggestion about how to do this, although it does not work for me. Here's another in an answer to a similar question on S.O , which I'll shamelessly plagiarize, tweak, and add a bit of explanation. First, source for our example library, test.c : #include <stdio.h>
void sayHello (char *tag) {
printf("%s: Hello!\n", tag);
}
int main (int argc, char *argv[]) {
sayHello(argv[0]);
return 0;
} Compile that: gcc -fPIC -pie -o libtest.so test.c -Wl,-E Here, we are compiling a shared library ( -fPIC ), but telling the linker that it's a regular executable ( -pie ), and to make its symbol table exportable ( -Wl,-E ), such that it can be usefully linked against. And, although file will say it's a shared object, it does work as an executable: > ./libtest.so
./libtest.so: Hello! Now we need to see if it can really be dynamically linked. An example program, program.c : #include <stdio.h>
extern void sayHello (char*);
int main (int argc, char *argv[]) {
puts("Test program.");
sayHello(argv[0]);
return 0;
} Using extern saves us having to create a header. Now compile that: gcc program.c -L. -ltest Before we can execute it, we need to add the path of libtest.so for the dynamic loader: export LD_LIBRARY_PATH=./ Now: > ./a.out
Test program.
./a.out: Hello! And ldd a.out will show the linkage to libtest.so . Note that I doubt this is how glibc is actually compiled, since it is probably not as portable as glibc itself (see man gcc with regard to the -fPIC and -pie switches), but it demonstrates the basic mechanism. For the real details you'd have to look at the source makefile. | {
"source": [
"https://unix.stackexchange.com/questions/223385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37799/"
]
} |
223,408 | I was slightly confused by: % vim tmp
zsh: suspended vim tmp
% kill %1
% jobs
[1] + suspended vim tmp
% kill -SIGINT %1
% jobs
[1] + suspended vim tmp
% kill -INT %1
% jobs
[1] + suspended vim tmp So I resigned to just "do it myself" and wonder why later: % fg
[1] - continued vim tmp
Vim: Caught deadly signal TERM
Vim: Finished.
zsh: terminated vim tmp
% Oh! Makes sense really, now that I think about it, that vim has to be running in order for its signal handler to be told to quit, and to do so. But obviously not what I intended. Is there a way to "wake and quit" in a single command? i.e., a built-in alias for kill %N && fg %N ? Why does resuming in the background not work? If I bg instead of fg , Vim stays alive until I fg , which sort of breaks my above intuition. | vi-vi-vi is of the devil. You must kill it with fire. Or SIGKILL : kill -KILL %1 The shell's builtin kill is kind enough to send SIGCONT to suspended processes so that you don't have to do it yourself, but that won't help if the process blocks the signal you're sending or if handling the signal causes the process to become suspended again (if a background process tries to read from the terminal, by default, it'll be sent SIGTTIN , which suspends the process if unhandled). | {
"source": [
"https://unix.stackexchange.com/questions/223408",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62835/"
]
} |
223,503 | In my bash script I'm trying to print a line if a certain string does not exist in a file. if grep -q "$user2" /etc/passwd; then
echo "User does exist!!" This is how I wrote it if I wanted the string to exist in the file but how can I change this to make it print "user does not exist" if the user is not found in the /etc/passwd file? | grep will return success if it finds at least one instance of the pattern and failure if it does not. So you could either add an else clause if you want both "does" and "does not" prints, or you could just negate the if condition to only get failures. An example of each: if grep -q "$user2" /etc/passwd; then
echo "User does exist!!"
else
echo "User does not exist!!"
fi
if ! grep -q "$user2" /etc/passwd; then
echo "User does not exist!!"
fi | {
"source": [
"https://unix.stackexchange.com/questions/223503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128833/"
]
} |
223,746 | What are the contents of this monolithic code base? I understand processor architecture support, security, and virtualization, but I can't imagine that being more than 600,000 lines or so. What are the historic & current reason drivers are included in the kernel code base? Do those 15+ million lines include every single driver for every piece of hardware ever? If so, that then begs the question, why are drivers embedded in the kernel and not separate packages that are auto-detected and installed from hardware IDs? Is the size of the code base an issue for storage-constrained or memory-constrained devices? It seems it would bloat the kernel size for space-constrained ARM devices if all that was embedded. Are a lot of lines culled by the preprocessor? Call me crazy, but I can't imagine a machine needing that much logic to run what I understand is the roles of a kernel. Is there evidence that the size will be an issue in 50+ years due to it's seemingly ever-growing nature? Including drivers means it will grow as hardware is made. EDIT : For those thinking this is the nature of kernels, after some research I realized it isn't always. A kernel is not required to be this large, as Carnegie Mellon's microkernel Mach was listed as an example 'usually under 10,000 lines of code' | Drivers are maintained in-kernel so when a kernel change requires a global search-and-replace (or search-and-hand-modify) for all users of a function, it gets done by the person making the change. Having your driver updated by people making API changes is a very nice advantage, instead of having to do it yourself when it doesn't compile on a more recent kernel. The alternative (which is what happens for drivers maintained out-of-tree), is that the patch has to get re-synced by its maintainers to keep up with any changes. A quick search turned up a debate over in-tree vs. out-of-tree driver development. The way Linux is maintained is mostly by keeping everything in the mainline repo. Building of small stripped-down kernels is supported by config options to control #ifdef s. So you can absolutely build tiny stripped-down kernels which compile only a tiny part of the code in the whole repo. The extensive use of Linux in embedded systems has led to better support for leaving stuff out than Linux had years earlier when the kernel source tree was smaller. A super-minimal 4.0 kernel is probably smaller than a super-minimal 2.4.0 kernel. | {
"source": [
"https://unix.stackexchange.com/questions/223746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66928/"
]
} |
223,778 | How can I run an infinite loop in the background, while continuing on with the script's execution? Example "script": while true; do something_in_the_background; done
do_something_while_the_loop_goes_on_in_the_background
for i in 1 2 3; do somethingelse; done
exit 0 This (notice the & ) seems to crash the whole system after a short while: while true; do
something_in_the_background &
done
do_something_while_the_loop_goes_on_in_the_background
for i in 1 2 3; do somethingelse; done
exit 0 | With the & inside the loop it will start a new process in the background and as fast as it can do it again without waiting for the first process to end. Instead I think you want to put the loop into the background, so put the & on the loop itself like while /bin/true; do
something_in_the_background
done &
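loop_pid=$!   # optional: $! is the PID of the backgrounded loop, so the script can stop it later with: kill "$loop_pid"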
# more stuff | {
"source": [
"https://unix.stackexchange.com/questions/223778",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129013/"
]
} |
223,897 | I have a html file. I want to remove all lines that do not start with <tr> . I tried: cat my_file | sed $'
s/^[^tr].*//
' | sed '/^$/d' but it deleted all the lines. | Try this with GNU sed: sed -n '/^<tr>/p' file or sed '/^<tr>/!d' file Your original attempt failed because [^tr] is a character class matching any single character other than t or r , so ^[^tr] matched (and blanked) every line starting with < , including the <tr> lines, rather than lines not starting with the string tr . | {
"source": [
"https://unix.stackexchange.com/questions/223897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
223,902 | When I use a pendrive with two partitions on a Windows system, it only recognizes the first partition that I've created in that pendrive. I have a pendrive with two partitions: an ext4 and a ntfs (the one that should be recognized). So, the problem is that when I use this pendrive on Windows it tries to read my ext4 partition since it's the first one that I've created. I'm not sure if just changing the pendrive's name partition from sda2 to sda1 on linux could solve my problem on windows, but that is the only solution that I can think right now. | Windows only mounts the first entry in the partition table of a drive it flags as removable, so the fix is to make the NTFS partition the first table entry. Partition names like sda1 and sda2 simply follow the order of the table entries, not the order of creation, so what you want is indeed to swap the two entries rather than to move any data. One way to do that is to dump the table with sfdisk -d /dev/sdX > table.txt , edit the dump so that the NTFS partition is listed first, and write it back with sfdisk /dev/sdX < table.txt ; keep the original dump as a backup in case anything goes wrong. After the swap, the NTFS partition shows up as sda1 and Windows should use it. | {
"source": [
"https://unix.stackexchange.com/questions/223902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103357/"
]
} |
224,098 | My colleague ran grep | crontab . After that all jobs disappeared. Looks like he was trying to run crontab -l . So what happened after running the command grep | crontab ? Can anyone explain? | crontab installs a new crontab for the invoking user (or for the named user, when run as root), reading it from STDIN. This is what happened in your case. grep without any arguments prints a usage message on STDERR as usual, and you are piping the STDOUT of grep , which is empty, into STDIN of crontab , hence your crontab is gone.
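To guard against a repeat, keep a dump of the table around (the file name here is arbitrary): crontab -l > ~/crontab.backup
After a mishap like this one, crontab ~/crontab.backup puts the saved table back. | {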
"source": [
"https://unix.stackexchange.com/questions/224098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129224/"
]
} |
224,156 | I installed Debian Jessie with default partitioning on my SSD drive. My current disk partitioning looks like this: As I have 16GB of RAM, I assume I don't need swap . But since I have other disk drives I may create a swapfile for example, on one of the other drives instead. Can you tell me what steps I should take to remove the swap partition correctly and permanently for it not to occupy disk space ? I wish to delete the swap partition as I currently have only 128GB SSD. Here is what I tried and rebooted each time; each of these steps being not permanent , or did not do anything : Using the swapoff utility: swapoff --all Using the GParted utility: Right-clicking the swap partition and clicking Swapoff. Commenting out the swap partition's UUID in the following file: /etc/fstab Commenting out the swap partition's UUID in the following file: /etc/initramfs-tools/conf.d/resume Running these commands in the end (both in this and the opposite order): update-grub
update-initramfs -u | If you have GParted open, close it. Its Swapoff feature does not appear to be permanent. Open a terminal and become root ( su ); if you have sudo enabled, you may also do for example sudo -i (see man sudo for all options): sudo -i Turn off the particular swap partition and / or all of the swaps: swapoff --all Make 100% sure the particular swap partition is off: cat /proc/swaps Open a text editor you are skilled in with this file, e.g. nano if unsure: nano /etc/fstab Comment out / remove the swap partition's UUID , e.g.: # UUID=1d3c29bb-d730-4ad0-a659-45b25f60c37d none swap sw 0 0 Open a text editor you are skilled in with this file, e.g. nano if unsure: nano /etc/initramfs-tools/conf.d/resume Comment out / remove the previously identified swap partition's UUID , e.g.: # RESUME=UUID=1d3c29bb-d730-4ad0-a659-45b25f60c37d Don't close the terminal as you will need it later anyway. Note: The next steps differ depending on whether you rely on CLI or GUI . GUI : Open up GParted , either from the menu, or more conveniently from the terminal we have opened: gparted If you don't have it installed, you may do so; afterwards run the previous command again: apt-get install gparted Choose your drive from the top-right menu. As GParted reactivates the swap partition upon launch, you will have to right-click the particular swap partition and click Swapoff -> This will be applied immediately. Delete the swap partition with right click -> Delete. You must apply the change now. Resize your main / other partition with right click -> Resize/Move. You must apply the change now. Back to the terminal, let's recreate the boot images : update-initramfs -u -k all Update GRUB : update-grub You may reboot now if you wish to test that the machine boots up. Encryption note: If your swap partition is encrypted, then you also need to comment out the related line in /etc/crypttab , otherwise CryptSetup will keep you waiting for 90 seconds during boot time. Thanks frank for this addition. CLI : I will check in VMs if my solution works, then I will share it. In the meantime, see this answer .
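After the reboot, you can confirm that no swap came back; both of these should show no swap entries: cat /proc/swaps
free -h The Swap row in free -h should be all zeroes. | {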
"source": [
"https://unix.stackexchange.com/questions/224156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
224,227 | I made an alias to save some keystrokes with working with systemd: $ alias sctl='systemctl' However, this breaks tab completion for the subcommands. Is it possible to alias a command without breaking tab completion? | First find out which complete-function is used for the systemctl command: complete | grep " systemctl$" The output looks like this: complete -F _functionname systemctl Then use: complete -F _functionname sctl to register the function for the completion of your alias. Now, when you type sctl <tab><tab> , the same suggestions as when you type systemctl will appear.
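To make this survive new shells, put both lines together in your ~/.bashrc ; with the bash-completion package the function is typically called _systemctl , but use whatever name the grep above printed: alias sctl='systemctl'
complete -F _systemctl sctl Adjust the function name if your system reports a different one. | {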
"source": [
"https://unix.stackexchange.com/questions/224227",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53613/"
]
} |
224,240 | The only references to i915 I can find are indeed to the linux kernel driver for the intel chips. Intel just seems to call them HD graphics whatever. Intel 915 seems to refer to some Pentium 4 chipsets but they are unrelated to the current graphics architecture. | Well, that P4 chipset is the reason for the driver name. Starting with i810 , Intel outsourced the driver to Tungsten Graphics, but commissioned it as an open source one for Linux. The first 915 chipset was released in June 2004 and soon after 1 , a driver for this chipset was added to the linux kernel (see also 2.6.9-rc2 changelog). The driver name was, you guessed it, i915 : +#define DRIVER_AUTHOR "Tungsten Graphics, Inc."
+
+#define DRIVER_NAME "i915"
+#define DRIVER_DESC "Intel Graphics"
+#define DRIVER_DATE "20040405" This was consistent with previous names of drivers that supported various Intel graphics chipset families (e.g. i810 , i830 2 ). Later on, support for other chipset families (including HD Graphics) was added to the same driver, which makes that nowadays i915 supports a long list 3 of Intel graphics chipsets. 1: as you can see in this message from David Airlie to Linus Torvalds and Andrew Morton 2: in fact, i830 was replaced by i915 in 2.6.39, see also the initial patch linked in another message from David to Linus 3: that list from wikipedia wasn't updated to include Broadwell & Skylake chipsets | {
"source": [
"https://unix.stackexchange.com/questions/224240",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126364/"
]
} |
224,277 | Background I'm copying some data CDs/DVDs to ISO files to use them later without the need of them in the drive. I'm looking on the Net for procedures and I found a lot: Use of cat to copy a medium: http://www.yolinux.com/TUTORIALS/LinuxTutorialCDBurn.html cat /dev/sr0 > image.iso Use of dd to do so (apparently the most widely used): http://www.linuxjournal.com/content/archiving-cds-iso-commandline dd if=/dev/cdrom bs=blocksize count=count of=/path/to/isoimage.iso Use of just pv to accomplish this: See man pv for more information, although here's an excerpt of it: Taking an image of a disk, skipping errors:
pv -EE /dev/sda > disk-image.img
Writing an image back to a disk:
pv disk-image.img > /dev/sda
Zeroing a disk:
pv < /dev/zero > /dev/sda I don't know if all of them should be equivalent, although I tested some of them (using the md5sum tool) and, at least, dd and pv are not equivalent. Here's the md5sum of both the drive and generated files using each procedure: md5 of dd procedure: 71b676875b0194495060b38f35237c3c md5 of pv procedure: f3524d81fdeeef962b01e1d86e6acc04 EDIT: That output was from another CD than the output given. In fact, I realized there are some interesting facts I provide as an answer. In fact, the size of each file is different comparing to each other. So, is there a best procedure to copy a CD/DVD or am I just using the commands incorrectly? More information about the situation Here is more information about the test case I'm using to check the procedures I've found so far: isoinfo -d i /dev/sr0 Output: https://gist.github.com/JBFWP286/7f50f069dc5d1593ba62#file-isoinfo-output-19-aug-2015 dd to copy the media, with output checksums and file information
Output: https://gist.github.com/JBFWP286/75decda0a67605590d32#file-dd-output-with-md5-and-sha256-19-aug-2015 pv to copy the media, with output checksums and file information
Output: https://gist.github.com/JBFWP286/700a13fe0a2f06ce5e7a#file-pv-output-with-md5-and-sha256-19-aug-2015 Any help will be appreciated! | All of the following commands are equivalent. They read the bytes of the CD /dev/sr0 and write them to a file called image.iso . cat /dev/sr0 >image.iso
cat </dev/sr0 >image.iso
tee </dev/sr0 >image.iso
dd </dev/sr0 >image.iso
dd if=/dev/cdrom of=image.iso
pv </dev/sr0 >image.iso
cp /dev/sr0 image.iso
tail -c +1 /dev/sr0 >image.iso Why would you use one over the other? Simplicity. For example, if you already know cat or cp , you don't need to learn yet another command. Robustness. This one is a bit of a variant of simplicity. How much risk is there that changing the command is going to change what it does? Let's see a few examples: Anything with redirection: you might accidentally put a redirection the wrong way round, or forget it. Since the destination is supposed to be a non-existing file, set -o noclobber should ensure that you don't overwrite anything; however you might overwrite a device if you accidentally write >/dev/sda (for a CD, which is read-only, there's no risk, of course). This speaks in favor of cat /dev/sr0 >image.iso (hard to get wrong in a damaging way) over alternatives such as tee </dev/sr0 >image.iso (if you invert the redirections or forget the input one, tee will write to /dev/sr0 ). cat : you might accidentally concatenate two files. That leaves the data easily salvageable. dd : i and o are close on the keyboard, and somewhat unusual. There's no equivalent of noclobber , of= will happily overwrite anything. The redirection syntax is less error-prone. cp : if you accidentally swap the source and the target, the device will be overwritten (again, assuming a non read-only device). If cp is invoked with some options such as -R or -a which some people add via an alias, it will copy the device node rather than the device content. Additional functionality. The one tool here that has useful additional functionality is pv , with its powerful reporting options. But here you can check how much has been copied by looking at the size of the output file anyway. Performance. This is an I/O-bound process; the main influence in performance is the buffer size: the tool reads a chunk from the source, writes the chunk to the destination, repeats. If the chunk is too small, the computer spends its time switching between tasks. If the chunk is too large, the read and write operations can't be parallelized. The optimal chunk size on a PC is typically around a few megabytes but this is obviously very dependent on the OS, on the hardware, and on what else the computer is doing. I made benchmarks for hard disk to hard disk copies a while ago, on Linux, which showed that for copies within the same disk, dd with a large buffer size has the advantage, but for cross-disk copies, cat won over any dd buffer size. There are a few reasons why you find dd mentioned so often. Apart from performance, they aren't particularly good reasons. In very old Unix systems, some text processing tools couldn't cope with binary data (they used null-terminated strings internally, so they tended to have problems with null bytes; some tools also assumed that characters used only 7 bits and didn't process 8-bit character sets properly). I'm not sure if this ever was a problem with cat (it was with more line-oriented tools such as head , sed , etc.), but people tended to avoid it on binary data because of its association with text processing. This is not a problem on modern systems such as Linux, OSX, *BSD, or anything that's POSIX-compliant. There's a sort of myth that dd is somewhat “lower level” than other tools such as cat and accesses devices directly. This is completely false: dd and cat and tee and the others all read bytes from their input and write the bytes to their output. The real magic is in /dev/sr0 . 
dd has an unusual command line syntax, so explaining how it works gives more of an opportunity to shine than simply writing cat /dev/sr0 does. 
"source": [
"https://unix.stackexchange.com/questions/224277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
224,415 | In Linux, a finished execution of a command such as cp or dd doesn't mean that the data has been written to the device. One has to, for example, call sync , or invoke the "Safely Remove" or "Eject" function on the drive. What's the philosophy behind such an approach? Why isn't the data written at once? Is there no danger that the write will fail due to an I/O error? | What's the philosophy behind such an approach? Efficiency (better usage of the disk characteristics) and performance (allows the application to continue immediately after a write). Why isn't the data written at once? The main advantage is the OS is free to reorder and merge contiguous write operations to improve their bandwidth usage (less operations and less seeks). Hard disks perform better when a small number of large operations are requested while applications tend to need a large number of small operations instead. Another clear optimization is the OS can also remove all but the last write when the same block is written multiple times in a short period of time, or even remove some writes all together if the affected file has been removed in the meantime. These asynchronous writes are done after the write system call has returned. This is the second and most user visible advantage. Asynchronous writes speeds up the applications as they are free to continue their work without waiting for the data to actually be on disk. The same kind of buffering/caching is also implemented for read operations where recently or often read blocks are retained in memory instead of being read again from the disk. Is there no danger that the write will fail due to an IO error? Not necessarily. That depends on the file system used and the redundancy in place. An I/O error might be harmless if the data can be saved elsewhere. Modern file systems like ZFS do self heal bad disk blocks. Note also that I/O errors do not crash modern OSes. If they happen during data access, they are simply reported to the affected application. If they happen during structural metadata access and put the file system at risk, it might be remounted read-only or made inaccessible. There is also a slight data loss risk in case of an OS crash, a power outage, or an hardware failure. This is the reason why applications that must be 100% sure the data is on disk (e.g. databases/financial apps) are doing less efficient but more secure synchronous writes. To mitigate the performance impact, many applications still use asynchronous writes but eventually sync them when the user saves explicitly a file (e.g. vim, word processors.) On the other hand, a very large majority of users and applications do not need nor care the safety that synchronous writes do provide. If there is a crash or power outage, the only risk is often to lose at worst the last 30 seconds of data.
Unless there is a financial transaction involved or something similar that would imply a cost much larger than 30 seconds of their time, the huge performance gain that asynchronous writes allow (which is not an illusion but very real) largely outweighs the risk. Finally, synchronous writes are not enough to protect the written data anyway. Should your application really need to be sure its data cannot be lost whatever happens, data replication on multiple disks and in multiple geographical locations needs to be put in place to resist disasters like fire, flooding, etc. | {
"source": [
"https://unix.stackexchange.com/questions/224415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61003/"
]
} |
224,627 | Am building a script to find out when a list of servers was last fully updated with yum update . I can find it by a history |grep "yum update"|head -n 1 however, the problem is that a user could have launced it but didn't type "y" in the prompt. Another way I tried was with yum history ID | Login user | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
109 | <xyz user> | 2015-08-20 07:18 | Erase | 1 E<
108 | root <root> | 2015-08-18 08:56 | Update | 3 >
107 | root <root> | 2015-08-14 07:39 | Update | 1
106 | root <root> | 2015-08-14 07:38 | Update | 1
105 | root <root> | 2015-08-14 07:38 | Update | 3
104 | root <root> | 2015-08-13 07:31 | Update | 1
103 | root <root> | 2015-08-11 05:46 | Update | 1
102 | root <root> | 2015-08-11 05:46 | Update | 2
101 | root <root> | 2015-08-11 05:45 | Update | 3
100 | root <root> | 2015-08-11 05:45 | Update | 3
99 | root <root> | 2015-08-10 20:41 | Update | 1
98 | root <root> | 2015-08-05 02:35 | Update | 1
97 | root <root> | 2015-05-14 10:52 | Update | 1
96 | root <root> | 2015-05-01 02:59 | Obsoleting | 2
95 | root <root> | 2015-04-09 16:06 | Update | 1 <
94 | <xyz.user> | 2015-03-28 08:49 | Update | 1 ><
93 | <xyz.usert> | 2015-03-28 08:14 | Erase | 3 ><
92 | <xyz.user> | 2015-03-13 07:46 | Install | 6 ><
91 | <xyz.user> | 2015-03-13 05:45 | I, U | 24 >
90 | root <root> | 2015-03-04 01:24 | Update | 3 But I can't find a way of determining the date the yum update was launched and was successful. Since if I check, for example, the transaction ID 108 which is marked as "Update" launched on 18th, I don't find the command yum update for that particular date : history |grep 2015 |grep "yum update"
5182 20150313-054444 > yum update Another path I tried was with /var/log/yum.log but yum.log will show installs and updates also. If a package is updated while installing a package e:g: yum install varnish and it requires an update of certain packages eg:(varnish-libs-2.1.5-5.el6.i686, 3.0.7-1.el6.i686 etc) this will be shown as updated in the yum.log Is there a way to find the date a yum update was launched and it was successful? | You almost answered your question. Here is a way you can find latest 5 updated packages: grep Updated: /var/log/yum.log | tail -5 Output example: Aug 05 13:28:34 Updated: virt-manager-common-1.1.0-9.git310f6527.fc21.noarch
Aug 05 13:28:34 Updated: glusterfs-libs-3.5.5-2.fc21.i686
Aug 05 13:28:35 Updated: virt-manager-1.1.0-9.git310f6527.fc21.noarch
Aug 05 13:28:36 Updated: virt-install-1.1.0-9.git310f6527.fc21.noarch
Aug 05 13:28:38 Updated: glusterfs-3.5.5-2.fc21.i686 | {
"source": [
"https://unix.stackexchange.com/questions/224627",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129073/"
]
} |
224,631 | let's say that i've done a find for .gif files and got a bunch of files back. I now want to test them to see if they are animated gifs. Can i do this via the command line? I have uploaded a couple of examples in the following, in case you want to experiment on them. Animated GIF image Static GIF Image | This can easily be done using ImageMagick identify -format '%n %i\n' -- *.gif
12 animated.gif
1 non_animated.gif identify -format %n prints the number of frames in the gif; for animated gifs, this number is bigger than 1. (ImageMagick is probably readily available in your distro's repositories for an easy install) | {
"source": [
"https://unix.stackexchange.com/questions/224631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27368/"
]
} |
224,697 | I have a file with a list of emails in it, and each line has an email in it. I want to remove lines that contain the string example.com or test.com . I tried this: sed -i 's/example\.com//ig' file.txt But it will remove only the string example.com ; how can I remove the entire line? | With GNU sed: sed '/example\.com/d;/test\.com/d' -i file.txt will remove the lines with example.com and test.com . From man sed : d Delete pattern space. Start next cycle.
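If you prefer a single expression, GNU sed also accepts alternation ( \| in basic regular expressions, or a plain | with -E ), so either of these sketches should be equivalent:
sed -i '/example\.com\|test\.com/d' file.txt
sed -E -i '/example\.com|test\.com/d' file.txt
| {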
"source": [
"https://unix.stackexchange.com/questions/224697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81823/"
]
} |
224,874 | Given input: hello: world foo bar baz
bar:
baz: bin boop bop fiz bang beep
bap: bim bam bop
boatkeeper: poughkeepsie I would like to sort it into most words at the top, to least at the end, like so: baz: bin boop bop fiz bang beep
hello: world foo bar baz
bap: bim bam bop
boatkeeper: poughkeepsie
bar: How would I do this with sort or some other tool? | You could do something like: awk '{print NF,$0}' file | sort -nr | cut -d' ' -f 2- We use awk to prefix the number of fields to each line. We then sort by that number and remove it with cut . | {
"source": [
"https://unix.stackexchange.com/questions/224874",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70397/"
]
} |
224,969 | Is there a constant variable in awk that stores today's date? If not, is there a way to store today's date for daily use? Let's say we have the file below: boo,foo,2016-08-30
foo,boo,2016-07-31 And I need to compare the date in $3 of the file with today's date, regardless of what it is, i.e. the script below awk -F, '{if($3>"2015-08-23"){print $0}}' where 2015-08-23 would be replaced by the current date. | There are no built-in functions in standard awk to get a date, but
the date can easily be assigned to a variable. awk -F, -v date="$(date +%Y-%m-%d)" '$3>date' or in an awk script BEGIN {
str = "date +%Y-%m-%d";
str | getline date;
close(str);
}
$3>date gawk does have built-in time functions , and strftime can be used. gawk -F, 'BEGIN{date=strftime("%Y-%m-%d")}$3>date'
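If you ever need real date arithmetic rather than string comparison, gawk's systime() and mktime() convert to epoch seconds; a small sketch (assuming the third field is always YYYY-MM-DD):
gawk -F, '
    BEGIN { now = systime() }
    {
        split($3, d, "-")
        if (mktime(d[1] " " d[2] " " d[3] " 0 0 0") > now) print
    }'
| {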
"source": [
"https://unix.stackexchange.com/questions/224969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123325/"
]
} |
224,992 | I read that there are two folders for unit files (not in user mode). /usr/lib/systemd/system/: units provided by installed packages
/etc/systemd/system/: units installed by the system administrator Conflicting with this understanding is the answer to this question: How to write startup script for Systemd . Can someone fill in the missing information so that I understand what is going on? ( UPDATE: The answer has been updated, and my understanding no longer conflicts with it. ) Also, it seems that the scripts are organized in subfolders within the /etc/systemd/system/ folder: getty.target.wants
multi-user.target.wants In another location I read that there are other locations. It seems these are for user-specific services. /usr/lib/systemd/user/ where services provided by installed packages go.
/etc/systemd/user/ where system-wide user services are placed by the system administrator.
~/.config/systemd/user/ where the user puts their own services. Update 2015-08-31: For the sake of others, here is a link to a related question I recently asked: Where do I put scripts executed by systemd units? | The best place to put system unit files: /etc/systemd/system Just be sure to add a target under the [Install] section, read "How does it know?" for details. UPDATE : /usr/local/lib/systemd/system is another option, read "Gray Area" for details. The best place to put user unit files: /etc/systemd/user or $HOME/.config/systemd/user but it depends on permissions and the situation. Note also that user services will only run while a user session is active. The truth is that systemd units (or as the intro sentence calls them, "unit configurations") can go anywhere — provided you are willing to make manual symlinks and you are aware of the caveats. It makes life easier to put the unit where systemctl daemon-reload can find it for some good reasons: Using a standard location means that systemd generators will find them and make them easy to enable at boot with systemctl enable . This is because your unit will automatically be added to a unit dependency tree (a unit cache). You do not need to think about permissions, because only the right privileged users can write to the designated areas. How does it know? And how exactly does systemctl enable know where to create the symlink? You hard code it within the
unit itself under the [Install] section. Usually there is a line like [Install]
WantedBy = multi-user.target that corresponds to a predefined place on the filesystem.
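For illustration, a complete minimal unit might look like this (the myapp name and path are hypothetical, purely to show where the [Install] section sits):
[Unit]
Description=My hypothetical application

[Service]
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target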
This way, systemctl knows that this unit is dependent on a group of unit files called multi-user.target ("target" is the term used to designate unit dependency groups. You can list all groups with systemctl list-units --type target ). The group of unit files to be loaded with a target is put in a targetname.target.wants directory. This is just a directory full of symlinks (or the real thing). If your [Install] section says it is WantedBy the multi-user.target , but if a symlink to it does not exist in the multi-user.target.wants directory, then it will not load. When the systemd unit generators add your unit file to the dependency tree cache at boot (you can manually trigger generators with systemctl daemon-reload ), it automatically knows where to put the symlink—in this case in the directory /etc/systemd/system/multi-user.target.wants/ should you enable it. Key Points in the Manual: Additional units might be loaded into systemd ("linked") from
directories not on the unit load path. See the link command for
systemctl(1). Under systemctl, look for Unit File Commands Unit File Load Path Please read and understand the first sentence in the following quote from man systemd.unit (because it implies that all of the paths I mention here may not apply to you if your systemd was compiled with different paths): Unit files are loaded from a set of paths determined during compilation, described in the two tables below. Unit files found in directories listed earlier override files with the same name in directories lower in the list. When the variable $SYSTEMD_UNIT_PATH is set, the contents of this variable overrides the unit load path. If $SYSTEMD_UNIT_PATH ends with an empty component (":"), the usual unit load path will be appended to the contents of the variable. Table 1 and Table 2 from man systemd.unit are good. Load paths when running in system mode ( --system ). /etc/systemd/system Local configuration /run/systemd/system Runtime units /usr/lib/systemd/system Units of installed packages (or /lib/systemd/system in some cases, read man systemd.unit ) Load path when running in user mode ( --user ) There is a difference between per user units and all/global users units. User-dependent $XDG_CONFIG_HOME/systemd/user User configuration (only used when $XDG_CONFIG_HOME is set) $HOME/.config/systemd/user User configuration (only used when $XDG_CONFIG_HOME is not set) $XDG_RUNTIME_DIR/systemd/user Runtime units (only used when $XDG_RUNTIME_DIR is set) $XDG_DATA_HOME/systemd/user Units of packages that have been installed in the home directory (only used when $XDG_DATA_HOME is set) $HOME/.local/share/systemd/user Units of packages that have been installed in the home directory (only used when $XDG_DATA_HOME is not set) --global (all users) Units that apply to all users--meaning owned by each user, too. So each user can stop these services even if an administrator enables them at boot. /etc/systemd/user Local configuration for all users ( systemctl --global enable userunit.service ) /usr/lib/systemd/user Units of packages that have been installed system-wide for all users (or /lib/systemd/system in some cases, read man systemd.unit) /run/systemd/user Runtime units Gray Area On the one hand, the File Hierarchy Standard (also man file-hierarchy ) specifies that /etc is for local configurations that do not execute binaries. On the other hand
it specifies that /usr/local/ "is for use by the system administrator when installing software locally". You could also argue (if not just for the purpose of organization) that all system unit files should go under /usr/local/lib/systemd/system , but this is intended for unit files that are part of "software" not from a package manager.
The corresponding systemd user units that are system-wide could go under /usr/local/lib/systemd/user . Transient Unit Another forgotten place is nowhere at all! Perhaps a lesser-known program is systemd-run . You can use it to run transient units on the fly. See man systemd-run . For example, to schedule a reboot tomorrow morning at 4 a.m. (you might need --force to ensure a reboot happens): systemd-run -u restart --description="Restarts machine" --on-calendar="2020-12-18 04:00:00" systemctl --force reboot This will yield a transient unit file restart.service and a corresponding timer (because of the --on-calendar , as indicated by transient=yes in the resulting transient unit definition). /run/systemd/transient/restart.service # This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Unit]
Description=Restarts machine
[Service]
ExecStart=
ExecStart="/usr/bin/systemctl" "--force" "reboot" Note that there is also the more dangerous double force option --force --force , which tells the kernel to halt immediately (and, if you do not know what you're doing, unsafely, because it is almost equivalent to cutting the power). | {
"source": [
"https://unix.stackexchange.com/questions/224992",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33386/"
]
} |
225,062 | I ran the minimal Debian installation inside a VirtualBox instance, installed X11 and the Awesome window manager manually (without any custom configuration yet) and installed the VirtualBox additions as well (and enabled the shared clipboard in settings). However, copy-pasting text from the xterm terminal still doesn't seem to work: CTRL + C is sent as a signal to a terminal, and Shift + Insert inserts the text that I had selected (which probably means that it got copied to some buffer somehow), but it is still unavailable from the host operating system. | X11 uses two buffers: PRIMARY and CLIPBOARD . To copy/paste to the CLIPBOARD buffer you can often use CTRL-C and CTRL-V . You can insert into the PRIMARY buffer by selecting text, and paste from it by pressing the middle mouse button. If you want to use the CLIPBOARD buffer, put this in your ~/.Xresources file and use Ctrl + Shift + C and Ctrl + Shift + V to copy/paste from/to the CLIPBOARD buffer in xterm: xterm*VT100.Translations: #override \
Ctrl Shift <Key>V: insert-selection(CLIPBOARD) \n\
Ctrl Shift <Key>C: copy-selection(CLIPBOARD) You need to run xrdb -merge ~/.Xresources after putting that into the file.
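If you also want to drive these buffers from the command line, the xclip or xsel utilities (if installed) talk to the same selections; for example, a quick sketch with xclip:
echo hello | xclip -selection clipboard    # write to the CLIPBOARD buffer
xclip -selection clipboard -o              # read it back
| {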
"source": [
"https://unix.stackexchange.com/questions/225062",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66352/"
]
} |
225,179 | How can I show a spinner until a command finishes its job? In other words, if I am running a script, I want to show a spinner while the script is running, and the spinner should disappear when the script finishes its job. Below is a common spinner snippet: i=1
sp="/-\|"
echo -n ' '
while true
do
printf "\b${sp:i++%${#sp}:1}"
done How can I link the previous spinner code to a command so that it shows the spinner while the command is running, and the spinner disappears when the command finishes its job? If I include the command inside the loop it will loop with the spinner, so what is the solution in this case? | Have your while loop watch for your real command to exit. I'll assume a Linux environment that has /proc entries for each PID, but you could slice it other ways: #!/bin/bash
# your real command here, instead of sleep
sleep 7 &
PID=$!
i=1
sp="/-\|"
echo -n ' '
while [ -d /proc/$PID ]
do
printf "\b${sp:i++%${#sp}:1}"
done
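One optional refinement to the same script: after the loop, reap the background job to recover its exit status, and print a newline so the prompt does not land on the spinner character:
wait "$PID"
status=$?
printf '\n'
echo "command exited with status $status"
| {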
"source": [
"https://unix.stackexchange.com/questions/225179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
225,236 | I occasionally see things like: cat file | wc | cat > file2 Why do this? When will the results (or performance) differ (favourably) from simply: cat file | wc > file2 | cat file | wc | cat > file2 would usually be two useless uses of cat as that's functionally equivalent to: < file wc > file2 However, there may be a case for: cat file | wc -c over < file wc -c That is to disable the optimisation that many wc implementations do for regular files. For regular files, the number of bytes in the file can be obtained without having to read the whole content of the file, but simply by doing a stat() system call on it and retrieving the size as stored in the inode. Now, one may want the file to be read for instance because: the stat() information cannot be trusted (like for some files in /proc or /sys on Linux): $ < /sys/class/net/lo/mtu wc -c
4096
$ cat /sys/class/net/lo/mtu | wc -c
6 one wants to check how much of the data can be read (like in case of a failing hard drive). one just wants to obtain benchmarks on how fast the data can be read. one wants for the content of the file to be cached in memory. Of course, those are exceptions. In the general case, you'd rather use < file wc -c for performance reasons. Now, you can imagine even more far-fetched scenarios where one may want to use: cat file | wc | cat > file2 : maybe wc has an apparmor profile or other security mechanism that prohibits it from reading or writing to files while it's allowed for cat (that would be unheard of) maybe cat is able to deal with large (as in > 2^32 bytes) files, but not wc on that system (things like that have been needed for some commands on some systems in the past). maybe one wants wc (and the first cat ) to run and read the whole file (and be killed at the very last minute) even if file2 can't be opened for writing. maybe one wants to hide the failure (exit status) of opening or reading the content of file . Though wc < file > file2 || : would make more sense. maybe one wants to hide (from the output of lsof (list open files)) the fact that he's getting a word count from file or that he's storing a word count in file2 . | {
"source": [
"https://unix.stackexchange.com/questions/225236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62835/"
]
} |
225,401 | I check service status with systemctl status service-name . By default, I see only a few rows, so I add -n50 to see more. Sometimes, I want to see the full log, from the start. It could have thousands of rows. Now, I check it with -n10000 , but that doesn't look like a neat solution. Is there an option to view the full systemd service log, similar to the less command? | Just use the journalctl command, as in: journalctl -u service-name.service Or, to see only log messages for the current boot: journalctl -u service-name.service -b For things named <something>.service , you can actually just use <something> , as in: journalctl -u service-name But for other sorts of units (sockets, targets, timers, etc), you need to be explicit. In the above commands, the -u flag is short for --unit , and specifies the name of the unit in which you're interested. -b is short for --boot , and restricts the output to only the current boot so that you don't see lots of older messages. See the journalctl man page for more information.
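Two related flags that often come in handy (see the same man page): -f follows the log like tail -f , and --since narrows the time window, for example:
journalctl -u service-name.service -f
journalctl -u service-name.service --since today
| {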
"source": [
"https://unix.stackexchange.com/questions/225401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32942/"
]
} |
225,412 | I installed Apache using this script https://gist.github.com/Benedikt1992/e88c2114fee15422a4eb The system is a freshly installed CentOS 6.7 minimal system. After installation I can find the apache in /usr/local/apache2/ but I can't start the apache with service or enable start on boot with chkconfig . What am I missing? | {
"source": [
"https://unix.stackexchange.com/questions/225412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125010/"
]
} |
225,537 | In Linux, every single entity is considered a FILE. If I do vim <cd-Name> then vim will open the directory content in its editor, because it does not differentiate between files and directories. But today while working, I encountered something I am curious about. I planned to open a file from a nested directory: vim a/b/c/d/file But instead of vim , I typed cd a/b/c/d/ and hit TAB twice, and the completion was showing only the available directories of the "d" directory rather than files. Doesn't the cd command honour " everything is a file "? Or am I missing something? | The " Everything is a file " phrase defines the architecture of the operating system. It means that everything in the system, from processes, files, directories, sockets, pipes, ... is represented by a file descriptor abstracted over the virtual filesystem layer in the kernel. The virtual filesystem is an interface provided by the kernel. Hence the phrase was corrected to say " Everything is a file descriptor ". Linus Torvalds himself corrected it again a bit more precisely: " Everything is a stream of bytes ". However, every "file" also has an owner and permissions you may know from regular files and directories. Therefore classic Unix tools like cat, ls, ps, ... can query all those "files", and there is no need to invent special mechanisms beyond the plain old tools, which all use the read() system call. For example, in Microsoft's OS family there are multiple different read() system calls (I heard about 15) for the various file types, and each of them is a bit different. When everything is a file, you don't need that. To your question: of course there are different file types . In Linux there are 7 file types . The directory is one of them. But the utilities can distinguish them from each other. For example, the completion function of the cd command (when you press TAB ) only lists directories, because the stat() system call (see man 2 stat ) returns a struct with a field called st_mode . The POSIX standard defines what that field can contain: S_ISREG(m) is it a regular file?
S_ISDIR(m) directory?
S_ISCHR(m) character device?
S_ISBLK(m) block device?
S_ISFIFO(m) FIFO (named pipe)?
S_ISLNK(m) symbolic link? (Not in POSIX.1-1996.)
S_ISSOCK(m) socket? (Not in POSIX.1-1996.) The cd command completion function just displays "files" where the S_ISDIR flag is set.
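You can observe the same type information from the shell: GNU stat decodes st_mode into a human-readable type (the exact wording may vary between coreutils versions):
$ stat -c '%n: %F' /etc /etc/hostname /dev/null
/etc: directory
/etc/hostname: regular file
/dev/null: character special file
| {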
"source": [
"https://unix.stackexchange.com/questions/225537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4843/"
]
} |
225,541 | I use Mac's ssh to get access to a server. I find the emacs on the server can not recognize the ALT key. My server is RHEL with emacs23. | {
"source": [
"https://unix.stackexchange.com/questions/225541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130297/"
]
} |
225,549 | I've been looking for the QEMU Guest Agent for Ubuntu 12.04 LTS. It seems like the Guest Agent is included in the official Repository from Ubuntu 14.04 and up ( http://packages.ubuntu.com/trusty/qemu-guest-agent ). Is there a way to get the Guest Agent running in 12.04? Update Compiling and/or installing qemu-guest-agent from the Trusty Repos seems to be the solution. While testing different VMs I noticed that the hosts have different OS versions (one with Precise/KVM and the other with Trusty/Spice). So my problem seems to be related to the combination of host and guest OSes. I have opened another question for this! | {
"source": [
"https://unix.stackexchange.com/questions/225549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102403/"
]
} |
225,687 | If a Unix (POSIX) process receives a signal, a signal handler will run. What will happen to it in a multithreaded process? Which thread receives the signal? In my opinion, the signal API should be extended to handle that (i.e. the thread of the signal handler should be able to be determined), but hunting for information on the net I only found years-long flame wars on the Linux kernel mailing list and on different forums. As I understood, Linus' concept differed from the POSIX standard, and first some compatibility layer was built, but now Linux follows the POSIX model. What is the current state? | The entry in POSIX on " Signal Generation and Delivery " in "Rationale: System Interfaces General Information" says Signals generated for a process are delivered to only one thread. Thus, if more than one thread is eligible to receive a signal, one has to be chosen. The choice of threads is left entirely up to the implementation both to allow the widest possible range of conforming implementations and to give implementations the freedom to deliver the signal to the "easiest possible" thread should there be differences in ease of delivery between different threads. From the signal(7) manual on a Linux system: A signal may be generated (and thus pending) for a process as a whole
(e.g., when sent using kill(2) ) or for a specific thread (e.g., certain
signals, such as SIGSEGV and SIGFPE, generated as a consequence of executing a specific machine-language instruction are thread directed, as are
signals targeted at a specific thread using pthread_kill(3) ). A process-directed signal may be delivered to any one of the threads that does not
currently have the signal blocked. If more than one of the threads has the
signal unblocked, then the kernel chooses an arbitrary thread to which to
deliver the signal. And in pthreads(7) : Threads have distinct alternate signal stack settings. However, a new
thread's alternate signal stack settings are copied from the thread that
created it, so that the threads initially share an alternate signal
stack (fixed in kernel 2.6.16). From the pthreads(3) manual on an OpenBSD system (as an example of an alternate approach): Signal handlers are normally run on the stack of the currently executing
thread. (I'm currently not aware of how this is handled when multiple threads are executing concurrently on a multi-processor machine) The older LinuxThread implementation of POSIX threads only allowed distinct single threads to be targeted by signals. From pthreads(7) on a Linux system: LinuxThreads does not support the notion of
process-directed signals: signals may be sent only to specific threads. | {
"source": [
"https://unix.stackexchange.com/questions/225687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52236/"
]
} |
225,802 | To debug a JACK/Pulseaudio issue, I want to understand when and why the pulseaudio daemon is started by systemd (on Fedora). Using: $ ps -o'pid,ppid,args' `pgrep pulse` I see that the pulseaudio daemon is being started by systemd (pid=1) PID PPID COMMAND
2738 1 /usr/bin/pulseaudio --start However, I was unable to find any unit-file on my system containing pulseaudio or even just pulse . My specific questions are: A) Is there a way to determine the systemd unit that caused the creation of a specific process (in my example output, process 2738, the PA daemon)? B) Are there alternative approaches to find out which unit-dependency chain or other settings of systemd resulted in the invocation of /usr/bin/pulseaudio --start ? | A) Is there a way to determine the systemd unit that caused the creation of a specific process (in my example output, process 2738, the PA daemon)? Sure. You can run systemctl status <pid> and systemd will find you the unit that contains that PID. For example, on my system I find a dnsmasq process: # ps -fe | grep dnsmasq
nobody 18834 1193 0 Aug25 ? 00:00:10 /usr/sbin/dnsmasq ... Who started it? # systemctl status 18834
● NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2015-08-25 11:07:40 EDT; 1 day 21h ago
Main PID: 1193 (NetworkManager)
Memory: 1.1M
CGroup: /system.slice/NetworkManager.service
├─ 1193 /usr/sbin/NetworkManager --no-daemon
├─ 1337 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-wlp3s0....
├─18682 /usr/libexec/nm-openvpn-service
├─18792 /usr/sbin/openvpn --remote ovpn-phx2.redhat.com 443 tcp --nobind --dev redhat --de...
└─18834 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --... I also have a pulseaudio process: # ps -fe | grep pulseaudio
lars 2948 1 0 Aug25 ? 00:06:20 /usr/bin/pulseaudio --start Running systemctl status 2948 , I see: ● session-3.scope - Session 3 of user lars
Loaded: loaded (/run/systemd/system/session-3.scope; static; vendor preset: disabled)
Drop-In: /run/systemd/system/session-3.scope.d
└─50-After-systemd-logind\x2eservice.conf, 50-After-systemd-user-sessions\x2eservice.conf, 50-Description.conf, 50-SendSIGHUP.conf, 50-Slice.conf
Active: active (running) since Tue 2015-08-25 11:09:23 EDT; 1 day 21h ago
CGroup: /user.slice/user-1000.slice/session-3.scope This tells me that pulseaudio was started from my desktop login session, rather than explicitly via systemd. | {
"source": [
"https://unix.stackexchange.com/questions/225802",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130481/"
]
} |
225,902 | How can I get a list of the packages last installed/upgraded by pacman/yaourt in Arch Linux, including the timestamp? | To get a list of the last installed packages, you can run: grep -i installed /var/log/pacman.log Example output of last installed packages: [2015-08-24 15:32] [ALPM] warning: /etc/pamac.conf installed as /etc/pamac.conf.pacnew
[2015-08-24 15:32] [ALPM] installed python-packaging (15.3-1)
[2015-08-24 15:32] [ALPM] installed python2-packaging (15.3-1)
[2015-08-25 10:37] [ALPM] installed ttf-ubuntu-font-family (0.80-5)
[2015-08-25 10:43] [ALPM] installed ttf-google-fonts (20150805.r201-1)
[2015-08-25 10:44] [ALPM] installed ttf-ubuntu-font-family (0.80-5)
[2015-08-26 17:39] [ALPM] installed mozilla-extension-gnome-keyring-git (0.10.r36.378d9f3-1) To get a list of last upgraded packages, you can run: grep -i upgraded /var/log/pacman.log Example output of last upgraded packages: [2015-08-27 10:00] [ALPM] upgraded libinput (0.99.1-1 -> 1.0.0-1)
[2015-08-27 10:00] [ALPM] upgraded python2-mako (1.0.1-1 -> 1.0.2-1)
[2015-08-27 16:03] [ALPM] upgraded tdb (1.3.6-1 -> 1.3.7-1)
[2015-08-27 16:03] [ALPM] upgraded ldb (1.1.20-1 -> 1.1.21-1)
[2015-08-27 16:03] [ALPM] upgraded python2-mako (1.0.2-1 -> 1.0.2-2) To get a list of the last installed or upgraded packages, you can run: grep -iE 'installed|upgraded' /var/log/pacman.log Example output of last installed or upgraded packages: [2015-08-25 09:56] [ALPM] upgraded jdk (8u51-2 -> 8u60-1)
[2015-08-25 10:37] [ALPM] installed ttf-ubuntu-font-family (0.80-5)
[2015-08-25 10:43] [ALPM] installed ttf-google-fonts (20150805.r201-1)
[2015-08-25 10:44] [ALPM] installed ttf-ubuntu-font-family (0.80-5)
[2015-08-26 17:39] [ALPM] installed mozilla-extension-gnome-keyring-git (0.10.r36.378d9f3-1)
[2015-08-27 10:00] [ALPM] upgraded curl (7.43.0-1 -> 7.44.0-1)
[2015-08-27 10:00] [ALPM] upgraded gc (7.4.2-2 -> 7.4.2-3)
[2015-08-27 10:00] [ALPM] upgraded kmod (21-1 -> 21-2)
[2015-08-27 10:00] [ALPM] upgraded libinput (0.99.1-1 -> 1.0.0-1)
[2015-08-27 10:00] [ALPM] upgraded python2-mako (1.0.1-1 -> 1.0.2-1)
[2015-08-27 16:03] [ALPM] upgraded tdb (1.3.6-1 -> 1.3.7-1)
[2015-08-27 16:03] [ALPM] upgraded ldb (1.1.20-1 -> 1.1.21-1)
[2015-08-27 16:03] [ALPM] upgraded python2-mako (1.0.2-1 -> 1.0.2-2)
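If you have expac installed (a small alpm query tool from the Arch repositories), it can print each package's install timestamp directly; for example, a sketch listing the 20 most recently installed/upgraded packages:
expac --timefmt='%Y-%m-%d %T' '%l\t%n' | sort | tail -n 20
| {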
"source": [
"https://unix.stackexchange.com/questions/225902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24036/"
]
} |
225,943 | I need to write a shell script that runs in this way: ./myscript arg1 arg2_1 arg2_2 arg2_3 ....... arg2_# there is a for loop inside the script for i in $@ However, as I know, $@ includes $1 up to $($#-1). But for my program $1 is distinctly different from $2 $3 $4 etc. I would like to loop from $2 to the end... How do I achieve this? Thank you :) | First, note that $@ without quotes makes no sense and should not be used. $@ should only be used quoted ( "$@" ) and in list contexts. for i in "$@" qualifies as a list context, but here, to loop over the positional parameters, the canonical, most portable and simplest form is: for i
do something with "$i"
done Now, to loop over the elements starting from the second one, the canonical and most portable way is to use shift : first_arg=$1
shift # short for shift 1
for i
do something with "$i"
done After shift , what used to be $1 has been removed from the list (but we've saved it in $first_arg ) and what used to be in $2 is now in $1 . The positional parameters have been shifted 1 position to the left (use shift 2 to shift by 2...). So basically, our loop is looping from what used to be the second argument to the last. With bash (and zsh and ksh93 , but that's it), an alternative is to do: for i in "${@:2}"
do something with "$i"
done But note that it's not standard sh syntax so should not be used in a script that starts with #! /bin/sh - . In zsh or yash , you can also do: for i in "${@[3,-3]}"
do something with "$i"
done to loop from the 3rd to the 3rd last argument. In zsh , $@ is also known as the $argv array. So to pop elements from the beginning or end of the arrays, you can also do: argv[1,3]=() # remove the first 3 elements
argv[-3,-1]=() ( shift can also be written 1=() in zsh ) In bash , you can only assign to the $@ elements with the set builtin, so to pop 3 elements off the end, that would be something like: set -- "${@:1:$#-3}" And to loop from the 3rd to the 3rd last: for i in "${@:3:$#-5}"
do something with "$i"
done POSIXly, to pop the last 3 elements of "$@" , you'd need to use a loop: n=$(($# - 3))
for arg do
[ "$n" -gt 0 ] && set -- "$@" "$arg"
shift
n=$((n - 1))
done | {
"source": [
"https://unix.stackexchange.com/questions/225943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118930/"
]
} |
226,164 | That is literal, {fd} isn't a placeholder. I have a script that does this, and does not source in anything, nor does it reference {fd} anywhere else. Is this valid bash? exec {fd}</dev/watchdog | Rather than having to pick a file descriptor and hope it's available: exec 4< /dev/watchdog # Was 4 in use? Who knows? this notation asks the shell to pick a file descriptor that isn't currently in use, open the file for reading on that descriptor, and assign the number to the given variable ( fd ). $ exec {fd}< /dev/watchdog
$ echo $fd
10
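When you are done with it, a descriptor opened this way can be closed through the same variable:
exec {fd}<&-
| {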
"source": [
"https://unix.stackexchange.com/questions/226164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43342/"
]
} |
226,206 | I have a bunch of output going through sed and awk. How can I prefix the output with START and suffix the answer with END? For example, if I have All this code
on all these lines
and all these How could I get: START
All this code
on all these lines
and all these
END ? My attempt was: awk '{print "START";print;print "END"}' but I got ...
START
All this code
END
START
on all these lines
END
START
and all these
END | This works, as indicated by jasonwryan : awk 'BEGIN{print "START"}; {print}; END{print "END"}'
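Since the data is already coming through a pipe, a plain shell group is a hedged alternative that needs no awk at all:
some_pipeline | { echo START; cat; echo END; }
| {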
"source": [
"https://unix.stackexchange.com/questions/226206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
226,219 | I am under the following restrictions: I have a 1.0 GB .zip file on my computer which contains one file, a disk image of raspbian . When uncompressed, this file is 3.2 GB large and named 2015-11-21-raspbian-jessie.img . After having downloaded the zip file, I have just under 1.0 GB of storage space on my computer, not enough space to extract the image to my computer. This file needs to be uncompressed and written to an SD card using plain old dd . Is it possible for me to write the image to the SD card under these restrictions? I know it's possible to pipe data through tar and then pipe that data elsewhere, however, will this still work for the zip file format, or does the entire archive need to be uncompressed before any files are accessible? | Use unzip -p : unzip -p 2015-11-21-raspbian-jessie.zip 2015-11-21-raspbian-jessie.img | dd of=/dev/sdb bs=1M -p extracts files to stdout | {
"source": [
"https://unix.stackexchange.com/questions/226219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
226,267 | How can I create a script to count the processes run by each user in ps aux ? I used this ps aux | awk '{print $1}' | grep root | wc -l but it lists the count for the root user only. I want to list the number of processes for each user. I need something like this: root 20
jamshi 15 | ps -fo user | sort | uniq -c is worth a try. The command ps -eo user=|sort|uniq -c will list process counts by user. ps -eo user=|sort|uniq -c
2 avahi
1 kernoops
1 messagebus
1 nobody
231 root
1 statd
5 steve
1 syslog If you need the two columns flipped, pipe it through awk '{ print $2 " " $1 }'
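To put the busiest users at the top, add a numeric reverse sort on the count:
ps -eo user= | sort | uniq -c | sort -rn
| {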
"source": [
"https://unix.stackexchange.com/questions/226267",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130790/"
]
} |
226,276 | I need to know if a process with a given PID has opened a port without using external commands.
I must then use the /proc filesystem. I can read the /proc/$PID/net/tcp file, for example, and get information about TCP ports opened by the process. However, in a multithreaded process, the /proc/$PID/task/$TID directory will also contain a net/tcp file. My question is: do I need to go over all the threads' net/tcp files, or will ports opened by threads be written into the process's net/tcp file? | I can read the /proc/$PID/net/tcp file for example and get information about TCP ports opened by the process. That file is not a list of tcp ports opened by the process . It is a list of all open tcp ports in the current network namespace, and for processes running in the same network namespace it is identical to the contents of /proc/net/tcp . To find ports opened by your process, you would need to get a list of socket descriptors from /proc/<pid>/fd , and then match those descriptors to the inode field of /proc/net/tcp .
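Setting the no-external-commands constraint aside for a moment, a shell sketch of that matching logic could look like the following (the PID is hypothetical; a real program would do the same with readlink(2) and by parsing /proc/net/tcp itself — the inode is the 10th whitespace-separated field, and addresses/ports are in hex):
pid=1234    # hypothetical PID
for fd in /proc/"$pid"/fd/*; do
    link=$(readlink "$fd") || continue
    case $link in
        'socket:['*']')
            inode=${link#socket:[}
            inode=${inode%]}
            awk -v ino="$inode" '$10 == ino' /proc/net/tcp
            ;;
    esac
done
| {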
"source": [
"https://unix.stackexchange.com/questions/226276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130795/"
]
} |
226,327 | I just found out by accident that CTRL + 4 closes programs reading stdin input from the command-line. This is how it looks when I type CTRL + 4 or CTRL + / into programs reading stdin $ cat
wefwef
wefwef
^\Quit
$ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
^\Quit
$ I get ^\Quit displayed and then the program closes. What is the difference of this compared to using ^C or ^D ? What does ^\Quit do? Edit : Found out that CTRL + \ does the very same thing. | Ctrl+4 sends ^\ Terminals send characters (or more precisely bytes), not keys. When a key that represents a printable character is pressed, the terminal sends that character to the application. Most function keys are encoded as escape sequences: sequences of characters that start with the character number 27. Some keychords of the form Ctrl + character , and a few function keys, are sent as control characters — in the ASCII character set , which all modern computers use as a basis (Unicode, ISO Latin- n , etc. are all supersets of ASCII), 33 characters are control characters: characters number 0 through 31 and 127. Control characters are not printable, but intended to have an effect in applications; for example character 10, which is Control-J (commonly written ^J), is a newline character, so when a terminal displays that character, it moves the cursor to the next line, rather than displaying a glyph. The escape character itself is a control character, ^[ (value 27). There aren't enough control characters to cover all Ctrl + character keychords. Only letters and the characters @[\]^_? have a corresponding control character. When you press Ctrl + 4 or Ctrl + $ (which I presume is Ctrl + Shift + 4 ), the terminal has to pick something to send. Depending on the terminal and its configuration, there are several common possibilities: The terminal ignores the Ctrl modifier and sends the character 4 or $ . The terminal sends an escape sequence that encodes the exact key and modifiers that were pressed. The terminal sends some other control character. Many terminals send control characters for some keys in the digit row: Ctrl + 2 → ^@ Ctrl + 3 → ^[ Ctrl + 4 → ^\ Ctrl + 5 → ^] Ctrl + 6 → ^^ Ctrl + 7 → ^_ Ctrl + 8 → ^? I don't know where this particular convention arose. Ctrl + | sends the same character because it's Ctrl + Shift + \ and the terminal sends ^\ whether the shift key was pressed or not. ^\ quits The terminal itself (more precisely, the generic terminal support in the kernel) interprets a few control characters specially. This interpretation can be configured to map different characters or turned off by applications that want to process the characters by themselves. One well-known such interpretation is that ^M, the character sent by the Return key, sends the current line to the application, if the terminal is in cooked mode , in which applications receive input line by line. A few characters send signals to the application in the foreground. ^C sends the interrupt signal (SIGINT), which conventionally tells the application to stop what it's doing and read the user's next command. Non-interactive applications usually exit. ^\ sends the quit signal (SIGQUIT), which conventionally tells the application to exit as soon as possible without saving anything; many applications don't override the default behavior, which is to kill the application immediately¹. So when you press Ctrl + 4 (or anything that sends the ^\ character) in cat or bc , neither of which overrides the default behavior, the application is killed.
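You can watch a program opt out of this default behavior: the sketch below traps SIGQUIT in a shell loop, so pressing Ctrl + \ prints a message instead of killing it (the inner sleep still dies each round, which is fine for a demo; stop the loop with Ctrl + C ):
bash -c 'trap "echo caught SIGQUIT, still running" QUIT; while true; do sleep 1; done'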
The terminal itself prints the ^\ part of the message: it's a visual depiction of the character that you typed, and the terminal is in cooked mode and with echo turned on (characters are displayed by the terminal as soon as you type them, as opposed to non-echo mode where the characters are only sent to the application, which may or may not choose to display them). The Quit part comes from bash: it notices that its child process died from a quit signal, and that's its way of letting you know. Shells handle all common signals, so that if you type ^\ in a shell, you don't kill your session, you just get a new prompt, same as ^C. You can play with terminal settings with the stty command. ¹ And traditionally generate a core dump , but many systems disable that by default nowadays. | {
"source": [
"https://unix.stackexchange.com/questions/226327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128859/"
]
} |