source_id | question | response | metadata
---|---|---|---|
220,503 | I have a set of packages (*.rpm). For each package I can do rpm -qRp <package> to list its requires, but I would like to install them (those requires) without installing the packages themselves. The requires all live in enabled repositories. Is there some easy way to do this without writing my own script that would parse the output of rpm -qRp, for example? I know I could do it by installing everything including requires ( yum localinstall ) and then uninstalling the original packages, but the problem is that my set contains packages with both dependencies and conflicts between them. The required packages, however, don't conflict. I would have to do multiple yum localinstall <list> followed by yum remove <list> and make sure the packages in each list don't conflict. Is there a better way? I would basically like something like yum-builddep , but for requires, not buildrequires. My distros are Fedora / RHEL | You can use the yum deplist command to generate a list of package dependencies:

$ yum deplist bind
  dependency: /bin/bash
   provider: bash.x86_64 4.3.39-5.fc21
  dependency: /bin/sh
   provider: bash.x86_64 4.3.39-5.fc21
  dependency: bind-libs(x86-64) = 32:9.9.6-10.P1.fc21
   provider: bind-libs.x86_64 32:9.9.6-10.P1.fc21
  dependency: coreutils
   provider: coreutils.x86_64 8.22-22.fc21
[...]

Grab the provider: lines from this for a list of packages:

$ yum deplist bind | awk '/provider:/ {print $2}' | sort -u
bash.x86_64
bind-libs.x86_64
coreutils.x86_64
glibc.i686
glibc.x86_64
grep.x86_64
krb5-libs.x86_64
libcap.x86_64
libcom_err.x86_64
libxml2.x86_64
openssl-libs.x86_64
shadow-utils.x86_64
systemd.x86_64
zlib.x86_64

Send this output to yum install to install the packages:

$ yum deplist bind | awk '/provider:/ {print $2}' | sort -u | xargs yum -y install | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/220503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27687/"
]
} |
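A minimal sketch that applies the same idea directly to the local *.rpm files from the question, assuming yum can resolve each printed capability (internal rpmlib(...) dependencies are filtered out, and version constraints are stripped for simplicity; xargs -r is a GNU extension):

    # collect the requires of every local package, then install them
    for pkg in *.rpm; do
        rpm -qRp "$pkg"
    done | grep -v '^rpmlib(' | sed 's/ .*$//' | sort -u | xargs -r yum -y install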
220,510 | When I use any variation of English, US international (with dead keys, altGr dead keys or alternative) on my Linux Mint machine I always encounter this behaviour. When I press one of these keys: ' " and then follow them with a 'non-accentable' character like a [ or a b , no output comes out at all, whereas in Windows US-International it would print [ or b . If I wanted to type this I would have to escape each dead key with a space instead of with any 'non-accentable' character. This is annoying when programming (not really, but I trained with the Windows 'Qwerty International' on typing.io and switching back and forth between the systems is irritating). Is there any way to change that so it works like in Windows? | On Ubuntu 14.04 I did the following: 1) Installed uim using the Software Manager; other packages like uim-xim , uim-gtk2 , uim-gtk3 and uim-qt are auto-installed. See https://launchpad.net/ubuntu/+source/uim . 2) Defined environment variables by adding the next lines to ~/.profile ; this way the custom compose key sequences only apply to the current user:

# Restart the X-server after making alterations using:
# $ sudo restart lightdm
# It seems only GTK_IM_MODULE or QT_IM_MODULE needs to be defined.
export GTK_IM_MODULE="uim"
export QT_IM_MODULE="uim"

3) To mimic Windows US International keyboards I saved one of the following files at ~/.XCompose : https://gist.githubusercontent.com/guiambros/b773ee85746e06454596/raw/0ea6d7f7cf9a6ff38b4cafde24dd43852e46d5e3/.XCompose or http://pastebin.com/vJg6G0th This worked for me after 1) restarting Ubuntu or 2) restarting just the X-server by entering the following command in a terminal: $ sudo restart lightdm NB: Restarting only seems necessary after altering the ~/.profile file; alterations to ~/.XCompose will take effect the next time an application (Terminal, Gedit, etc.) starts. To check whether the environment variables are set right, enter the following command in your terminal: $ printenv | grep IM_MODULE Many thanks to: https://wrgms.com/using-xcompose-with-chrome-and-sublime-text About custom compose key sequences: http://manpages.ubuntu.com/manpages/trusty/man5/XCompose.5.html https://help.ubuntu.com/community/ComposeKey About custom keyboard mapping: https://help.ubuntu.com/community/Custom%20keyboard%20layout%20definitions | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220510",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126507/"
]
} |
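If you would rather hand-write a few sequences than download a full file, a tiny ~/.XCompose sketch of the Windows-style behaviour could look like this (the two sequences shown are illustrative, not a complete layout):

    include "%L"
    # apostrophe followed by a 'non-accentable' key outputs both characters
    <dead_acute> <b>     : "'b"
    <dead_acute> <space> : "'"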
220,515 | I have installed Arch Linux + XFCE and installed Eclipse Mars on it. It works fine except for the mouse scroll. Does anyone know what I should look for on this issue? | I have the same problem using Arch Linux + GNOME Shell 3.16. I also use the PyDev plugin in Eclipse Mars. I fixed the issue by enabling the option "Show vertical scrollbar?" in Window -> Preferences -> PyDev -> Editor -> Overview Ruler Minimap. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/220515",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126513/"
]
} |
220,518 | Why isn't there the Unix API? I mean, as there is the Windows API. I know a lot of things in the Unix world are modular, and those things put together create a whole system. This sounds good, but it does create some problems when you try to make a native Unix app. For example, you want to program a nice word processor with a cool name, WP. The Windows version of WP will be built by calling the Windows API. You can either code this directly in C, or use any of the various wrapper libraries out there. But still, the program must be constructed by calling winapi, which provides every piece of functionality a programmer may need to build a Windows app, from basic system calls to GUI, 3D, multimedia or anything else, with more than a decade of backwards compatibility. If it weren't like this, Wine could never exist. Now you want to create a Unix version of WP. The standard C and C++ libraries and the POSIX API are very stable and well supported on any Unix variant. The problem occurs when you try to do more. So you need to create a window for WP, but how? There is X11, but it is not the only option. People think X11 should be replaced and are now making two incompatible replacements, Wayland and Mir. Even for X11, there is Xlib and xcb. xcb claims to be 'better', and it is true in some ways, but where is the documentation? You eventually choose Xlib to do the task, but the X11 standard by itself only defines very basic features. Anything else you'd expect for a GUI application, such as window events or clipboard support, needs to be dealt with via extensions, by calling XInternAtom . This sentence is merely my personal opinion, but the use of Atoms in X is extremely unintuitive. And another problem is that not every window manager for X supports these extensions well. So let's just leave this dirtiness to the developers of GTK+ and Qt, who break backwards compatibility with each new version. Is it even possible to have portable drag-and-drop support in Linux? It really seems to me that the Unix community is killing themselves in the desktop world. I know that the things I mentioned don't even matter for setting up a BSD server, but they do matter if you ever try to build a portable native Linux app. What is making up all of this mess? Is there really an effort to clean this up and standardize things for the modern desktop environment of Unix? Why isn't there the Unix API? Will there ever be one? | To be called UNIX you need to go through a certification process that requires (among other things) that you implement the POSIX standard. So your question is completely invalid: there is a UNIX API, and it's called POSIX. EDIT: Here is the list of requirements: http://www.unix.org/version4/overview.html | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220518",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
220,576 | I need to remove the last comma of each line. I have only one csv file, which looks like this:

98,N,N,N,N,S,
99,N,N,N,N,S,
101,Y,Y,Y,Y,S,

I have the following script, but it is not working:

for fname in conv2015_10_LogicalComponent_CosProfile.csv
do
  sed 's/.$//' $fname > tmp.tmp
  mv tmp.tmp $fname
done

| this should work:

for fname in conv2015_10_LogicalComponent_CosProfile.csv
do
  cat $fname | sed 's/.$//' > tmp.tmp
  mv tmp.tmp $fname
done

Another option, if you use the GNU sed "-i" option, is that you only need to do this: sed -i 's/.$//' filename Additionally, to clarify why "." is used there instead of ",": it is a regular expression which matches almost any character, so if there were a ";" at the end it would be replaced as well. To be more precise you can change ".$" to ",$". EDIT: I noticed that you mentioned that you actually have whitespace at the end. So this code works, even with whitespace. Proven on Solaris:

cat filename | sed 's/,[[:blank:]]*$//g' > tmp.tmp
mv tmp.tmp desired_filename

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126554/"
]
} |
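With GNU sed you can also keep a safety copy while editing in place — a small sketch (the .bak suffix is arbitrary):

    sed -i.bak 's/,[[:blank:]]*$//' conv2015_10_LogicalComponent_CosProfile.csv
    # the untouched original remains in conv2015_10_LogicalComponent_CosProfile.csv.bak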
220,586 | The first part of my requirement: I would like to extract a single file from ex1234.zip . The structure and contents of ex1234.zip :

ex1234 (directory)
directory1
ex1234 (directory)
directory2
ex1234.csv

I want to be able to extract only the ex1234.csv file but will not know the name. The second part is to be able to do this for all exXXXX.zip that sit in the same directory:

ex1234.zip
ex3245.zip
ex8829.zip
exXXXX.zip
…

Output will be:

ex1234.csv
ex3245.csv
ex8829.csv
exXXXX.csv

Real sample:

$ less CW2178470.zip
Archive:  CW2178470.zip
Zip file size: 26108 bytes, number of entries: 26
-rw---- 2.0 fat 108 bl defN 15-Aug-04 09:37 CW2178470/CW2178470.csv
-rw---- 2.0 fat 1363 bl defN 15-Aug-04 09:37 CW2178470/config/BusinessContactApprovers.csv
-rw---- 2.0 fat 158 bl defN 15-Aug-04 09:37 CW2178470/CW2178470/announcements.xml
-rw---- 2.0 fat 1037 bl defN 15-Aug-04 09:37 CW2178470/CW2178470/Plan/plan.xml
-rw---- 2.0 fat 141 bl defN 15-Aug-04 09:37 CW2178470/CW2178470/Plan/tasks.xml
-rw---- 2.0 fat 2408 bl defN 15-Aug-04 09:37 CW2178470/CW2178470/FI_Doc208411460_doc.xml
-rw---- 2.0 fat 215 bl defN 15-Aug-04 09:37 CW2178470/CW2178470/MessageBoard/nb_27482kst.26ihyzj_.htm
-rw---- 2.0 fat 2364 bl defN 15-Aug-04 09:37 CW2178470/CW2178470/MessageBoard/messageboard.xml
-rw---- 2.0 fat 1250 bl defN 15-Aug-04 09:37 CW2178470/CW2178470/team.xml
-rw---- 2.0 fat 22016 bl defN 15-Aug-04 09:37 CW2178470/CW2178470/Doc208411460.doc
-rw---- 2.0 fat 9973 bl defN 15-Aug-04 09:37 CW2178470/CW2178470/audithistory.xml
-rw---- 2.0 fat 6731 bl defN 15-Aug-04 09:37 CW2178470/CW2178470/ws.xml
-rw---- 2.0 fat 308 bl defN 15-Aug-04 09:37 CW2178470/xsd/WSFolder.xsd
-rw---- 2.0 fat 4897 bl defN 15-Aug-04 09:37 CW2178470/xsd/Task.xsd
-rw---- 2.0 fat 770 bl defN 15-Aug-04 09:37 CW2178470/xsd/ContractWorkspace.xsd
-rw---- 2.0 fat 4754 bl defN 15-Aug-04 09:37 CW2178470/xsd/AuditHistory.xsd
-rw---- 2.0 fat 25564 bl defN 15-Aug-04 09:37 CW2178470/xsd/CommonTypes.xsd
-rw---- 2.0 fat 5657 bl defN 15-Aug-04 09:37 CW2178470/xsd/MessageBoard.xsd
-rw---- 2.0 fat 2471 bl defN 15-Aug-04 09:37 CW2178470/xsd/Plan.xsd
-rw---- 2.0 fat 337 bl defN 15-Aug-04 09:37 CW2178470/xsd/InternalContractWorkspace.xsd
-rw---- 2.0 fat 1045 bl defN 15-Aug-04 09:37 CW2178470/xsd/SalesContractRequest.xsd
-rw---- 2.0 fat 3133 bl defN 15-Aug-04 09:37 CW2178470/xsd/FolderItem.xsd
-rw---- 2.0 fat 906 bl defN 15-Aug-04 09:37 CW2178470/xsd/ContractRequest.xsd
-rw---- 2.0 fat 8973 bl defN 15-Aug-04 09:37 CW2178470/xsd/WorkspaceTypes.xsd
-rw---- 2.0 fat 4645 bl defN 15-Aug-04 09:37 CW2178470/xsd/Team.xsd
-rw---- 2.0 fat 781 bl defN 15-Aug-04 09:37 CW2178470/xsd/SalesContractWorkspace.xsd
26 files, 112005 bytes uncompressed, 21940 bytes compressed: 80.4%
(END)

| You could use unzip like this: unzip -j file[.zip] [file] [-x xfile] where -j means junk paths, file[.zip] is your archive name, [file] is the archive member to be processed and [-x xfile] is the list of archive members to be excluded from processing. All these options are described in detail in the man page. So in your case, running for example: unzip -j ex1234.zip '*/*.csv' -x '*/*/*' will extract into the current directory all files matching *.csv from depth level 2 in the ex1234.zip archive (excluding archive members from depth level 3 and below, as '*/*/*' means paths that match at least two / ). Now, to process all the archives in the current directory you could run: for zipfile in *.zip; do unzip -j "$zipfile" '*/*.csv' -x '*/*/*'; done which extracts the .csv file from each archive in the current directory (that's why -j is needed).
In your particular case, there's no .csv on level 1 depth so you could also run: for zipfile in *.zip; do unzip -j "$zipfile" '*.csv' -x '*/*/*'; done which should yield the same result. To dry-run and see which files will be extracted (their archive paths) without actually extracting them, replace -j with -qql : for zipfile in *.zip; do unzip -qql "$zipfile" '*/*.csv' -x '*/*/*'; done As a side note, the -j option could be omitted iff the .csv files to be extracted were on depth level 1 (i.e. no parent dir); in that case you could simply run: for zipfile in *.zip; do unzip "$zipfile" '*.csv' -x '*/*'; done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/220586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126564/"
]
} |
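If unzip is not available, bsdtar (from libarchive) can do roughly the same job — a sketch, assuming a bsdtar build that supports --include; note that, unlike the -x filter above, this does not exclude .csv files nested deeper than level 2:

    for zipfile in *.zip; do
        bsdtar -xf "$zipfile" --include='*/*.csv' --strip-components=1
    done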
220,588 | If I have the two dates below: 2015-09-12,2015-08-13 and I need to get the number of days between them, I use the following code:

awk -F'[-,]' '{print 360*($4-$1)+30*($5-$2)+($6-$3)}'

The output of this code is -29 while the actual difference is 29. | You can define functions in awk like:

awk -F'[-,]' '
  function abs(v) {return v < 0 ? -v : v}
  {print abs(360*($4-$1)+30*($5-$2)+($6-$3))}'

Or:

function abs(v) {v += 0; return v < 0 ? -v : v}

so that the returned value is converted to its canonical form for both negative and positive numbers, and strings are always converted to numbers. Without it, abs($0) where the input record is 1e2 would yield 1e2 , while for -1e2 it would yield 100. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/220588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123325/"
]
} |
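Note that the 360*year + 30*month formula only approximates calendar days. When exact counts matter, one approach is epoch-second arithmetic with GNU date (a sketch; integer division absorbs DST offsets since both timestamps default to midnight local time):

    d1=2015-09-12 d2=2015-08-13
    echo $(( ($(date -d "$d1" +%s) - $(date -d "$d2" +%s)) / 86400 ))
    # prints 30, the exact day count — the 30-day-month formula above gives 29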
220,685 | I have a new install of Red Hat and I'm trying to do a " yum install tmux " but it is throwing a "no package available" error:

[root@PSCHQVP20017 ~]# yum install tmux
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
No package tmux available.
Error: Nothing to do

| You did not specify the distribution you are using. I guess it is rhel/centos 5 or 6: if so, you just need to add the proper EPEL repository to your YUM configuration and then:

yum update
yum install tmux

No need to download/compile it manually. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33183/"
]
} |
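For reference, enabling EPEL on RHEL/CentOS 6 usually amounts to installing the epel-release package — a sketch (the URL follows the standard Fedora mirror layout; adjust the "6" to your major release):

    rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
    yum install tmux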
220,744 | I am using my computer clusters to run a MD C program but I can not use full potential of those clusters. But the this node have 16 CPUs and I also only give 15 jobs for those CPUs. But I can not fully use those potentials. below is the result of ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMANDroot 1 0.0 0.0 23636 1624 ? Ss Jun15 0:01 /sbin/initroot 2 0.0 0.0 0 0 ? S Jun15 0:00 [kthreadd]root 3 0.0 0.0 0 0 ? S Jun15 0:00 [migration/0]root 4 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/0]root 5 0.0 0.0 0 0 ? S Jun15 0:00 [migration/0]root 6 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/0]root 7 0.0 0.0 0 0 ? S Jun15 0:00 [migration/1]root 8 0.0 0.0 0 0 ? S Jun15 0:00 [migration/1]root 9 0.0 0.0 0 0 ? S Jun15 0:05 [ksoftirqd/1]root 10 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/1]root 11 0.0 0.0 0 0 ? S Jun15 0:00 [migration/2]root 12 0.0 0.0 0 0 ? S Jun15 0:00 [migration/2]root 13 0.0 0.0 0 0 ? S Jun15 0:01 [ksoftirqd/2]root 14 0.0 0.0 0 0 ? S Jun15 0:02 [watchdog/2]root 15 0.0 0.0 0 0 ? S Jun15 0:00 [migration/3]root 16 0.0 0.0 0 0 ? S Jun15 0:00 [migration/3]root 17 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/3]root 18 0.0 0.0 0 0 ? S Jun15 0:17 [watchdog/3]root 19 0.0 0.0 0 0 ? S Jun15 0:00 [migration/4]root 20 0.0 0.0 0 0 ? S Jun15 0:00 [migration/4]root 21 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/4]root 22 0.0 0.0 0 0 ? S Jun15 0:02 [watchdog/4]root 23 0.0 0.0 0 0 ? S Jun15 0:00 [migration/5]root 24 0.0 0.0 0 0 ? S Jun15 0:00 [migration/5]root 25 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/5]root 26 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/5]root 27 0.0 0.0 0 0 ? S Jun15 0:00 [migration/6]root 28 0.0 0.0 0 0 ? S Jun15 0:00 [migration/6]root 29 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/6]root 30 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/6]root 31 0.0 0.0 0 0 ? S Jun15 0:00 [migration/7]root 32 0.0 0.0 0 0 ? S Jun15 0:00 [migration/7]root 33 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/7]root 34 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/7]root 35 0.0 0.0 0 0 ? S Jun15 0:00 [migration/8]root 36 0.0 0.0 0 0 ? S Jun15 0:00 [migration/8]root 37 0.0 0.0 0 0 ? S Jun15 0:01 [ksoftirqd/8]root 38 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/8]root 39 0.0 0.0 0 0 ? S Jun15 0:00 [migration/9]root 40 0.0 0.0 0 0 ? S Jun15 0:00 [migration/9]root 41 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/9]root 42 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/9]root 43 0.0 0.0 0 0 ? S Jun15 0:00 [migration/10]root 44 0.0 0.0 0 0 ? S Jun15 0:00 [migration/10]root 45 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/10]root 46 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/10]root 47 0.0 0.0 0 0 ? S Jun15 0:00 [migration/11]root 48 0.0 0.0 0 0 ? S Jun15 0:00 [migration/11]root 49 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/11]root 50 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/11]root 51 0.0 0.0 0 0 ? S Jun15 0:00 [migration/12]root 52 0.0 0.0 0 0 ? S Jun15 0:00 [migration/12]root 53 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/12]root 54 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/12]root 55 0.0 0.0 0 0 ? S Jun15 0:00 [migration/13]root 56 0.0 0.0 0 0 ? S Jun15 0:00 [migration/13]root 57 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/13]root 58 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/13]root 59 0.0 0.0 0 0 ? S Jun15 0:00 [migration/14]root 60 0.0 0.0 0 0 ? S Jun15 0:00 [migration/14]root 61 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/14]root 62 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/14]root 63 0.0 0.0 0 0 ? S Jun15 0:00 [migration/15]root 64 0.0 0.0 0 0 ? S Jun15 0:00 [migration/15]root 65 0.0 0.0 0 0 ? S Jun15 0:00 [ksoftirqd/15]root 66 0.0 0.0 0 0 ? S Jun15 0:00 [watchdog/15]root 67 0.0 0.0 0 0 ? S Jun15 0:00 [events/0]root 68 0.0 0.0 0 0 ? 
S Jun15 0:16 [events/1]root 69 0.0 0.0 0 0 ? S Jun15 0:26 [events/2]root 70 0.0 0.0 0 0 ? S Jun15 1:24 [events/3]root 71 0.0 0.0 0 0 ? S Jun15 4:17 [events/4]root 72 0.0 0.0 0 0 ? S Jun15 4:11 [events/5]root 73 0.0 0.0 0 0 ? S Jun15 0:31 [events/6]root 74 0.0 0.0 0 0 ? S Jun15 2:34 [events/7]root 75 0.0 0.0 0 0 ? S Jun15 1:11 [events/8]root 76 0.0 0.0 0 0 ? S Jun15 11:39 [events/9]root 77 0.0 0.0 0 0 ? S Jun15 1:15 [events/10]root 78 0.0 0.0 0 0 ? S Jun15 0:01 [events/11]root 79 0.0 0.0 0 0 ? S Jun15 0:00 [events/12]root 80 0.0 0.0 0 0 ? S Jun15 0:01 [events/13]root 81 0.0 0.0 0 0 ? S Jun15 0:00 [events/14]root 82 0.0 0.0 0 0 ? S Jun15 0:00 [events/15]root 83 0.0 0.0 0 0 ? S Jun15 0:00 [cpuset]root 84 0.0 0.0 0 0 ? S Jun15 0:00 [khelper]root 85 0.0 0.0 0 0 ? S Jun15 0:00 [netns]root 86 0.0 0.0 0 0 ? S Jun15 0:00 [async/mgr]root 87 0.0 0.0 0 0 ? S Jun15 0:00 [pm]root 88 0.0 0.0 0 0 ? S Jun15 0:09 [sync_supers]root 89 0.0 0.0 0 0 ? S Jun15 0:07 [bdi-default]root 90 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/0]root 91 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/1]root 92 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/2]root 93 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/3]root 94 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/4]root 95 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/5]root 96 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/6]root 97 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/7]root 98 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/8]root 99 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/9]root 100 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/10]root 101 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/11]root 102 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/12]root 103 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/13]root 104 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/14]root 105 0.0 0.0 0 0 ? S Jun15 0:00 [kintegrityd/15]root 106 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/0]root 107 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/1]root 108 0.0 0.0 0 0 ? S Jun15 0:01 [kblockd/2]root 109 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/3]root 110 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/4]root 111 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/5]root 112 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/6]root 113 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/7]root 114 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/8]root 115 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/9]root 116 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/10]root 117 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/11]root 118 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/12]root 119 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/13]root 120 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/14]root 121 0.0 0.0 0 0 ? S Jun15 0:00 [kblockd/15]root 122 0.0 0.0 0 0 ? S Jun15 0:00 [kacpid]root 123 0.0 0.0 0 0 ? S Jun15 0:00 [kacpi_notify]root 124 0.0 0.0 0 0 ? S Jun15 0:00 [kacpi_hotplug]root 125 0.0 0.0 0 0 ? S Jun15 0:00 [ata/0]root 126 0.0 0.0 0 0 ? S Jun15 0:00 [ata/1]root 127 0.0 0.0 0 0 ? S Jun15 0:00 [ata/2]root 128 0.0 0.0 0 0 ? S Jun15 0:00 [ata/3]root 129 0.0 0.0 0 0 ? S Jun15 0:00 [ata/4]root 130 0.0 0.0 0 0 ? S Jun15 0:00 [ata/5]root 131 0.0 0.0 0 0 ? S Jun15 0:00 [ata/6]root 132 0.0 0.0 0 0 ? S Jun15 0:00 [ata/7]root 133 0.0 0.0 0 0 ? S Jun15 0:00 [ata/8]root 134 0.0 0.0 0 0 ? S Jun15 0:00 [ata/9]root 135 0.0 0.0 0 0 ? S Jun15 0:00 [ata/10]root 136 0.0 0.0 0 0 ? S Jun15 0:00 [ata/11]root 137 0.0 0.0 0 0 ? S Jun15 0:00 [ata/12]root 138 0.0 0.0 0 0 ? S Jun15 0:00 [ata/13]root 139 0.0 0.0 0 0 ? S Jun15 0:00 [ata/14]root 140 0.0 0.0 0 0 ? S Jun15 0:00 [ata/15]root 141 0.0 0.0 0 0 ? S Jun15 0:00 [ata_aux]root 142 0.0 0.0 0 0 ? S Jun15 0:00 [ksuspend_usbd]root 143 0.0 0.0 0 0 ? S Jun15 0:00 [khubd]root 144 0.0 0.0 0 0 ? 
S Jun15 0:00 [kseriod]root 145 0.0 0.0 0 0 ? S Jun15 0:00 [md/0]root 146 0.0 0.0 0 0 ? S Jun15 0:00 [md/1]root 147 0.0 0.0 0 0 ? S Jun15 0:00 [md/2]root 148 0.0 0.0 0 0 ? S Jun15 0:00 [md/3]root 149 0.0 0.0 0 0 ? S Jun15 0:00 [md/4]root 150 0.0 0.0 0 0 ? S Jun15 0:00 [md/5]root 151 0.0 0.0 0 0 ? S Jun15 0:00 [md/6]root 152 0.0 0.0 0 0 ? S Jun15 0:00 [md/7]root 153 0.0 0.0 0 0 ? S Jun15 0:00 [md/8]root 154 0.0 0.0 0 0 ? S Jun15 0:00 [md/9]root 155 0.0 0.0 0 0 ? S Jun15 0:00 [md/10]root 156 0.0 0.0 0 0 ? S Jun15 0:00 [md/11]root 157 0.0 0.0 0 0 ? S Jun15 0:00 [md/12]root 158 0.0 0.0 0 0 ? S Jun15 0:00 [md/13]root 159 0.0 0.0 0 0 ? S Jun15 0:00 [md/14]root 160 0.0 0.0 0 0 ? S Jun15 0:00 [md/15]root 161 0.0 0.0 0 0 ? S Jun15 0:00 [md_misc/0]root 162 0.0 0.0 0 0 ? S Jun15 0:00 [md_misc/1]root 163 0.0 0.0 0 0 ? S Jun15 0:00 [md_misc/2]root 164 0.0 0.0 0 0 ? S Jun15 0:00 [md_misc/3]root 165 0.0 0.0 0 0 ? S Jun15 0:00 [md_misc/4]root 166 0.0 0.0 0 0 ? S Jun15 0:00 [md_misc/5]root 167 0.0 0.0 0 0 ? S Jun15 0:00 [md_misc/6]root 168 0.0 0.0 0 0 ? S Jun15 0:00 [md_misc/7]root 169 0.0 0.0 0 0 ? S Jun15 0:00 [md_misc/8]root 170 0.0 0.0 0 0 ? S Jun15 0:00 [md_misc/9]root 178 0.0 0.0 0 0 ? S Jun15 0:00 [kswapd0]root 179 0.0 0.0 0 0 ? S Jun15 0:00 [kswapd1]root 180 0.0 0.0 0 0 ? SN Jun15 0:00 [ksmd]root 181 0.0 0.0 0 0 ? SN Jun15 0:10 [khugepaged]root 182 0.0 0.0 0 0 ? S Jun15 0:00 [aio/0]root 183 0.0 0.0 0 0 ? S Jun15 0:00 [aio/1]root 184 0.0 0.0 0 0 ? S Jun15 0:00 [aio/2]root 185 0.0 0.0 0 0 ? S Jun15 0:00 [aio/3]root 186 0.0 0.0 0 0 ? S Jun15 0:00 [aio/4]root 187 0.0 0.0 0 0 ? S Jun15 0:00 [aio/5]root 188 0.0 0.0 0 0 ? S Jun15 0:00 [aio/6]root 189 0.0 0.0 0 0 ? S Jun15 0:00 [aio/7]root 190 0.0 0.0 0 0 ? S Jun15 0:00 [aio/8]root 191 0.0 0.0 0 0 ? S Jun15 0:00 [aio/9]root 192 0.0 0.0 0 0 ? S Jun15 0:00 [aio/10]root 193 0.0 0.0 0 0 ? S Jun15 0:00 [aio/11]root 194 0.0 0.0 0 0 ? S Jun15 0:00 [aio/12]root 195 0.0 0.0 0 0 ? S Jun15 0:00 [aio/13]root 196 0.0 0.0 0 0 ? S Jun15 0:00 [aio/14]root 197 0.0 0.0 0 0 ? S Jun15 0:00 [aio/15]root 198 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/0]root 199 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/1]root 200 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/2]root 201 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/3]root 202 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/4]root 203 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/5]root 204 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/6]root 205 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/7]root 206 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/8]root 207 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/9]root 208 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/10]root 209 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/11]root 210 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/12]root 211 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/13]root 212 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/14]root 213 0.0 0.0 0 0 ? S Jun15 0:00 [crypto/15]root 218 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/0]root 219 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/1]root 220 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/2]root 221 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/3]root 222 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/4]root 223 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/5]root 224 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/6]root 225 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/7]root 226 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/8]root 227 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/9]root 228 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/10]root 229 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/11]root 230 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/12]root 231 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/13]root 232 0.0 0.0 0 0 ? S Jun15 0:00 [kthrotld/14]root 233 0.0 0.0 0 0 ? 
S Jun15 0:00 [kthrotld/15]root 246 0.0 0.0 0 0 ? S Jun15 0:00 [kpsmoused]root 247 0.0 0.0 0 0 ? S Jun15 0:00 [usbhid_resumer]root 277 0.0 0.0 0 0 ? S Jun15 0:00 [kstriped]root 611 0.0 0.0 0 0 ? S Jun15 0:00 [scsi_eh_0]root 612 0.0 0.0 0 0 ? S Jun15 0:00 [scsi_eh_1]root 613 0.0 0.0 0 0 ? S Jun15 0:00 [scsi_eh_2]root 614 0.0 0.0 0 0 ? S Jun15 0:00 [scsi_eh_3]root 615 0.0 0.0 0 0 ? S Jun15 0:00 [scsi_eh_4]root 616 0.0 0.0 0 0 ? S Jun15 0:00 [scsi_eh_5]root 763 0.0 0.0 0 0 ? S Jun15 0:00 [scsi_eh_6]root 764 0.0 0.0 0 0 ? S Jun15 0:00 [scsi_wq_6]root 769 0.0 0.0 0 0 ? S Jun15 0:00 [scsi_eh_7]root 770 0.0 0.0 0 0 ? S Jun15 0:00 [fw_event0]root 773 0.0 0.0 0 0 ? S Jun15 0:54 [poll_0_status]root 818 0.0 0.0 0 0 ? S Jun15 0:12 [jbd2/sda3-8]root 819 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 820 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 821 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 822 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 823 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 824 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 825 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 826 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 827 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 828 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 829 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 830 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 831 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 832 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 833 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 834 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 921 0.0 0.0 11672 1652 ? S<s Jun15 0:00 /sbin/udevd -droot 1292 0.0 0.0 0 0 ? S Jun15 1:03 [edac-poller]root 1892 0.0 0.0 0 0 ? S Jun15 0:00 [mlx4]root 1894 0.0 0.0 0 0 ? S Jun15 0:00 [mlx4_opreq]root 1895 0.0 0.0 0 0 ? S Jun15 0:12 [flush-8:0]root 1896 0.0 0.0 0 0 ? S Jun15 0:12 [mlx4_sense]root 1905 0.0 0.0 0 0 ? S Jun15 0:00 [mlx4_en]root 2077 0.0 0.0 11684 1664 ? S< Jun15 0:00 /sbin/udevd -droot 2125 0.0 0.0 0 0 ? S Jun15 0:00 [jbd2/sda1-8]root 2126 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2127 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2128 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2129 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2130 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2131 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2132 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2133 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2134 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2135 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2136 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2137 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2138 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2139 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2140 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2141 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2142 0.0 0.0 0 0 ? S Jun15 0:00 [jbd2/sda5-8]root 2143 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2144 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2145 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2146 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2147 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2148 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2149 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2150 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2151 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2152 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2153 0.0 0.0 0 0 ? 
S Jun15 0:00 [ext4-dio-unwrit]root 2154 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2155 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2156 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2157 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2158 0.0 0.0 0 0 ? S Jun15 0:00 [ext4-dio-unwrit]root 2202 0.0 0.0 0 0 ? S Jun15 0:00 [kauditd]root 2250 0.0 0.0 0 0 ? S Jun15 0:00 [mthcacatas]root 2253 0.0 0.0 0 0 ? S Jun15 0:00 [mlx4_ib]root 2254 0.0 0.0 0 0 ? S Jun15 0:00 [ib_mad1]root 2261 0.0 0.0 0 0 ? S Jun15 0:00 [iw_cxgb3]root 2265 0.0 0.0 0 0 ? S Jun15 0:00 [nesewq]root 2266 0.0 0.0 0 0 ? S Jun15 0:00 [nesdwq]root 2270 0.0 0.0 11668 1660 ? S< Jun15 0:00 /sbin/udevd -droot 2273 0.0 0.0 0 0 ? S Jun15 0:00 [ib_mcast]root 2274 0.0 0.0 0 0 ? S Jun15 0:00 [ib_inform]root 2275 0.0 0.0 0 0 ? S Jun15 0:00 [local_sa]root 2276 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/0]root 2277 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/1]root 2278 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/2]root 2279 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/3]root 2280 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/4]root 2281 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/5]root 2282 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/6]root 2283 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/7]root 2284 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/8]root 2285 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/9]root 2286 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/10]root 2287 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/11]root 2288 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/12]root 2289 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/13]root 2290 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/14]root 2291 0.0 0.0 0 0 ? S Jun15 0:00 [ib_cm/15]root 2292 0.0 0.0 0 0 ? S Jun15 0:51 [ipoib]root 2293 0.0 0.0 0 0 ? S Jun15 3:17 [ipoib_auto_mode]root 2358 0.0 0.0 0 0 ? S Jun15 0:01 [ib_addr]root 2359 0.0 0.0 0 0 ? S Jun15 0:00 [iw_cm_wq]root 2360 0.0 0.0 0 0 ? S Jun15 0:00 [rdma_cm]root 2594 0.0 0.0 93224 896 ? S<sl Jun15 0:04 auditd155 2664 0.0 0.0 60788 9360 ? S Jun15 0:11 /usr/libexec/systemtap/stap-serverd -r 2.6.32-220.el6.x86_64 -a x86_64 --log=/var/log/stap-server/logroot 2691 0.0 0.0 250856 1560 ? Sl Jun15 0:01 /sbin/rsyslogd -i /var/run/syslogd.pid -c 4root 2714 0.0 0.0 0 0 ? S Jun15 0:00 [kondemand/0]root 2715 0.0 0.0 0 0 ? S Jun15 0:24 [kondemand/1]root 2716 0.0 0.0 0 0 ? S Jun15 0:48 [kondemand/2]root 2717 0.0 0.0 0 0 ? S Jun15 2:03 [kondemand/3]root 2718 0.0 0.0 0 0 ? S Jun15 5:41 [kondemand/4]root 2719 0.0 0.0 0 0 ? S Jun15 1:25 [kondemand/5]root 2720 0.0 0.0 0 0 ? S Jun15 1:26 [kondemand/6]root 2721 0.0 0.0 0 0 ? S Jun15 2:09 [kondemand/7]root 2722 0.0 0.0 0 0 ? S Jun15 0:40 [kondemand/8]root 2723 0.0 0.0 0 0 ? S Jun15 1:28 [kondemand/9]root 2724 0.0 0.0 0 0 ? S Jun15 0:09 [kondemand/10]root 2725 0.0 0.0 0 0 ? S Jun15 0:00 [kondemand/11]root 2726 0.0 0.0 0 0 ? S Jun15 0:00 [kondemand/12]root 2727 0.0 0.0 0 0 ? S Jun15 0:00 [kondemand/13]root 2728 0.0 0.0 0 0 ? S Jun15 0:00 [kondemand/14]root 2729 0.0 0.0 0 0 ? S Jun15 0:00 [kondemand/15]root 2740 0.0 0.0 9204 644 ? Ss Jun15 17:34 irqbalancerpc 2754 0.0 0.0 19024 984 ? Ss Jun15 0:02 rpcbindrpcuser 2772 0.0 0.0 23200 1204 ? Ss Jun15 0:00 rpc.statdroot 2800 0.0 0.0 0 0 ? S Jun15 3:22 [rpciod/0]root 2801 0.0 0.0 0 0 ? S Jun15 3:22 [rpciod/1]root 2802 0.0 0.0 0 0 ? S Jun15 3:21 [rpciod/2]root 2803 0.0 0.0 0 0 ? S Jun15 3:18 [rpciod/3]root 2804 0.0 0.0 0 0 ? S Jun15 3:16 [rpciod/4]root 2805 0.0 0.0 0 0 ? S Jun15 3:13 [rpciod/5]root 2806 0.0 0.0 0 0 ? S Jun15 3:10 [rpciod/6]root 2807 0.0 0.0 0 0 ? S Jun15 3:08 [rpciod/7]root 2808 0.0 0.0 0 0 ? S Jun15 26:27 [rpciod/8]root 2809 0.0 0.0 0 0 ? S Jun15 3:58 [rpciod/9]root 2810 0.0 0.0 0 0 ? 
S Jun15 3:34 [rpciod/10]root 2811 0.0 0.0 0 0 ? S Jun15 3:17 [rpciod/11]root 2812 0.0 0.0 0 0 ? S Jun15 3:11 [rpciod/12]root 2813 0.0 0.0 0 0 ? S Jun15 3:18 [rpciod/13]root 2814 0.0 0.0 0 0 ? S Jun15 3:07 [rpciod/14] Because the output excess the website limit I store the exact file on GoogleDrive if this may be useful. output of ps So are there some methods can solve this problem so I can fully use all those potential of CPUs? | As mentioned in a comment and without seeing any of your code or other information (which would not be on-topic here anyway) all I can say is your program appears to be IO bound. The means while your calculations could use more of your CPU, they are having to wait on data and spending many cycles waiting rather than calculating. This can be due to the way you write code (loop optimization, vectorization, etc). A common problem is accessing your data in a way that causes lots of cache misses. Your multiple cores may also share a L3 cache and if they are all working on different data you are probably running into a lot of misses there. Fetching memory from main DRAM is orders of magnitude slower than the on-die cache memory. If your data comes from the disks then you also have to deal with disk read latency in addition to DRAM latency. If the data is across ethernet or some other interconnect you also have to consider the latency in those reads. You can also incur lots of waiting on disk writes, particularly if you are writing often in small chunks rather than buffering for large writes. In short, there are lot of IO performance considerations that are limiting your ability to maintain 100% CPU usage on your cores. My recommendation is to profile your code, figure out your IO limitations and make sure your code is efficient and go forward from there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220744",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94887/"
]
} |
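Before restructuring any code, it can help to confirm the I/O-bound hypothesis with measurements — a rough sketch, assuming perf and sysstat are installed and ./md_sim is a placeholder name for the MD binary:

    perf stat -e cycles,instructions,cache-references,cache-misses ./md_sim
    iostat -x 5        # watch disk utilization while the job runs

A low instructions-per-cycle ratio together with a high cache-miss rate points at memory stalls; sustained high %util in iostat points at disk waits.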
220,750 | The switch port for the server (CentOS 7) is configured as trunk for VLAN #115,2014. I have loaded the 8021q module:

# lsmod | grep 8021q
# modprobe 8021q

I would like to configure an IP address on the server using VLAN #115, performing the following configuration. ifcfg-em1:

TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=em1
UUID=c0c4d851-d762-4301-8c20-d6128aee5261
DEVICE=em1
ONBOOT=yes

ifcfg-em1.115:

TYPE=Ethernet
BOOTPROTO=none
IPADDR=172.31.141.242
PREFIX=24
GATEWAY=172.31.141.1
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=em1.115
UUID=c0c4d851-d762-4301-8c20-d6128aee5261
DEVICE=em1.115
VLAN=yes
ONBOOT=yes

I ended up unable to restart the network service. The error message is: Failed to start LSB: Bring up/down networking. What am I doing wrong? | It seems that disabling NetworkManager did the trick :)

systemctl stop NetworkManager
systemctl disable NetworkManager

| {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/220750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119354/"
]
} |
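After stopping NetworkManager, the legacy service has to bring the interfaces up instead — a sketch, assuming the initscripts-based network.service shipped with CentOS 7:

    systemctl restart network
    systemctl enable network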
220,796 | I have a personal folder /a/b on the server with permission 700. I don't want others to list the contents of /a/b. The owner of /a is root. Now I need to grant full access to the directory /a/b/c for all users. I changed the permission of /a/b/c to 777 but it is still inaccessible to others. | You can. You just have to set the executable bit on the /a/b directory. That will prevent being able to see anything in b , but you can still do everything if you go directly to a/b/c .

% mkdir -p a/b/c
% chmod 711 a/b
% sudo chown root a/b
% ll a/b
ls: cannot open directory a/b: Permission denied
% touch a/b/c/this.txt
% ls a/b/c
this.txt

Beware that while others cannot list the contents of /a/b , they can access files in that directory if they guess the name of the file.

% echo hello | sudo tee a/b/f
% cat a/b/f
hello
% cat a/b/doesntexist
cat: a/b/doesntexist: No such file or directory

So be sure to maintain proper permissions (no group/world) on all other files/directories within the b directory, as this will avoid this caveat. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/220796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67765/"
]
} |
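Since /a/b/c ends up world-writable, it may also be worth adding the sticky bit, as on /tmp , so users cannot delete or rename each other's files there (a suggestion beyond what the answer covers):

    chmod 1777 /a/b/c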
220,834 | Suppose we have the following files in a directory:

test.txt
test.txt~
/subdir
  test1.txt
  test1.txt~

When I run rm -r ./*.*~ inside the top dir, only test.txt~ is removed. Why doesn't it perform the recursive removal despite the fact that I used the recursive flag? You can reproduce my case with the following script:

#create dir and 1-st level test files
mkdir dir
cd dir
touch test.txt
touch test.txt~

#create subdir and 2-nd level test files
mkdir subdir
cd subdir/
touch test1.txt~
touch test1.txt
cd ..
rm -r ./*.*~

| *.*~ does not expand to any directories; it will just match any file or directory in the current directory that has a . in it somewhere and ends in ~ . If you would like to find all the files that end in ~ from the directory you're in, I would use find:

find -type f -name '*~' -delete

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220834",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27689/"
]
} |
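The key point is that the shell expands the glob before rm ever starts, and -r only recurses into directories that the glob itself matched. You can see exactly what rm receives with:

    echo ./*.*~        # in the example above this prints only ./test.txt~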
220,844 | I have a text file with multiple rows. In every row I have a date in the following format:

12/2/2015
15/9/2013
08/3/2011
02/5/2005
...

I want to create a folder for every line in this text file using a simple for loop as follows:

#!/bin/bash -f
for f in $(cat ./file.txt ); do
  mkdir ${f}
done

This code didn't work because I have the character / in the date format. How can I create the folders and remove this character from the folder names at the same time? So the output folder names will be like this:

1222015
1592013
0832011
0252005
...

| You can use a command like this: sed -e 's/\///g' < file.txt | xargs mkdir The sed command will strip the slashes from each line of file.txt and the pipe to xargs will pass the resulting names to mkdir . An equivalent command (suggested by don_crissti) using tr instead of sed is: tr -d / < file.txt | xargs mkdir | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
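A pure-shell variant that avoids xargs and copes better with odd names, using bash's pattern substitution to delete the slashes (a sketch):

    while IFS= read -r line; do
        mkdir -- "${line//\//}"
    done < file.txt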
220,852 | This question is very similar to this one: List of available services. For my specific case, I'm wondering if there is a specific command to show the full list of services under Ubuntu. I did run ls /etc/init.d and it does show a pretty comprehensive list, but some entries are missing. I did see apache2 , mysql , gdm , and a whole lot of others. But some of them are missing. One example is plexmediaserver (I've installed Plex server recently and had some difficulties in finding the name of its service). So to rephrase this question in as few words as possible: is there a way to get the full list of possibilities of {x} for service {x} status ? Note: using Ubuntu 15.04 | Since Ubuntu has recently switched over to systemd, some services will be listed by upstart:

service --status-all

and others, by systemd:

systemctl -l --type service --all

or as root:

systemctl -r --type service --all

However, software still using the init system will likely be listed in /etc/init.d . Looking through all of those will yield most services registered on the system. There is a good summary on systemd over on the Arch wiki. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/220852",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114416/"
]
} |
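To also see services that are installed but not currently loaded (disabled or static units), this systemd command is useful as well:

    systemctl list-unit-files --type=service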
220,853 | So, I have a shell script for updating a MySQL database that looks something like this:

#!/bin/sh
mysql -h "localhost" -u "root" "-p********" "database" < "update.sql"
sleep 5
sh $0

It sleeps for 5 seconds and then the sh $0 reruns the script infinitely, without my intervention. However, my question is about memory: I am relatively new to shell scripts, but is the memory slowly piling up in a loop like this? Does the remote server recycle the memory, or will the script eventually reach a cut-off? (Or, will it crash from a memory leak?) | This is not a loop but recursion, and the memory usage increases linearly over time, which is what you don't want: each sh $0 starts a new shell while its parent keeps waiting for it. If you want a loop with constant memory usage, you can do it this way:

#!/bin/sh
while true; do
  mysql -h "localhost" -u "root" "-p********" "database" < "update.sql"
  sleep 5
done

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89000/"
]
} |
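An alternative that keeps the original self-restarting structure but still uses constant memory: replace the current process instead of nesting a child (exec never returns, so no parent shells pile up):

    #!/bin/sh
    mysql -h "localhost" -u "root" "-p********" "database" < "update.sql"
    sleep 5
    exec sh "$0"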
220,941 | I'm trying the following in a Bash script:

MV_PARAMS='"foo 1" "foo 2"'
mv $MV_PARAMS

What I want to actually execute is: mv "foo 1" "foo 2" But it doesn't seem to work. Trying this: mv "$MV_PARAMS" doesn't work either. | What you should do is use an array:

mv_params=("foo 1" "foo 2")
mv "${mv_params[@]}"

The array expansion will properly handle array elements with whitespace or special characters. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126771/"
]
} |
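To convince yourself the elements survive intact, printf is handy — it prints each array element on its own line (angle brackets added only for visibility):

    printf '<%s>\n' "${mv_params[@]}"
    # <foo 1>
    # <foo 2>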
220,966 | The syntax for a sudoers entry is user ALL=(ALL) ALL whereby:

the 1st field is the user that can execute sudo
the 3rd field is the users that can be sudo'ed into
the 4th field is the commands that can be executed via sudo
the 2nd field is to put the host(s) on which sudo can be run

================================= I do not understand the use of the 2nd field. How do we enable sudo on host A for use on another host B? | 2nd ALL = on all hosts (if you distribute the same sudoers file to many computers) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220966",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122717/"
]
} |
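A concrete sketch of a host-restricted rule in a sudoers file shared across machines (the names are illustrative):

    # alice may run apt-get as root, but only on web1 and web2
    alice web1,web2 = (root) /usr/bin/apt-get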
220,967 | I've just had a message today from Ubuntu 11.04 that I have only 100 MB left, so I cleaned up some files and got 200 MB. Then, after a couple of hours, suddenly I have only 26 MB?! I tried df , du via mount --bind , /forcefsck with reboot — nothing could show what the culprit was. Finally I searched for big files and realized /var/log/syslog was 100MB+ and /var/log/kern.log 200MB+; I blanked them with sudo bash -c 'echo > ...' and rebooted, and now I have some spare MB. But now, I realize I have another problem with df :

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             9,7G  8,8G  385M  96% /
none                  963M  696K  962M   1% /dev
none                  969M   12K  969M   1% /dev/shm
none                  969M  252K  969M   1% /var/run
none                  969M     0  969M   0% /var/lock
/dev/sda6             9,7G  8,1G  1,1G  89% /media/disk

$ df -h --block-size M
Filesystem           1M-blocks  Used Available Use% Mounted on
/dev/sda5                9845M 8960M      385M  96% /
none                      963M    1M      962M   1% /dev
none                      969M    1M      969M   1% /dev/shm
none                      969M    1M      969M   1% /var/run
none                      969M    0M      969M   0% /var/lock
/dev/sda6                9845M 8235M     1110M  89% /media/disk

Note that for / it says there are 9845M total and 8960M used — the remainder would be 9845-8960 = 885M; however, here I have only 385M available. Also, for /media/disk it says 9845M total and 8235M used — the remainder would be 9845-8235 = 1610M; however, here I have only 1110M available. In both cases, there is exactly a 500 MB difference. Where did this difference come from — and can I reclaim it? Here is also lsof | grep 'deleted' — I can't see anything suspicious here:

$ lsof | grep 'deleted'
nautilus  1911 user 21u REG 8,5 142760 260562 /home/user/.local/share/gvfs-metadata/home (deleted)
nautilus  1911 user 22w REG 8,5  32768 268807 /home/user/.local/share/gvfs-metadata/home-fe882154.log (deleted)
python    1919 user  8u REG 8,5   4096 392258 /tmp/ffiqRK968 (deleted)
python    2165 user  5w REG 8,5      0 132261 /home/user/.[SNIP].lock (deleted)
python    2166 user  5w REG 8,5      0 132261 /home/user/.[SNIP].lock (deleted)
python    2185 user 21r REG 8,5 142760 260562 /home/user/.local/share/gvfs-metadata/home (deleted)
python    2185 user 22r REG 8,5  32768 268807 /home/user/.local/share/gvfs-metadata/home-fe882154.log (deleted)
gnome-ter 2279 user 27u REG 8,5    640 392575 /tmp/vte5KDX2X (deleted)
gnome-ter 2279 user 28u REG 8,5   4936 392605 /tmp/vteKRDX2X (deleted)
gnome-ter 2279 user 29u REG 8,5    648 392947 /tmp/vteMZDX2X (deleted)
ubuntuone 2544 user 17u REG 8,5   4096 392335 /tmp/ffiMErq0V (deleted)
bamfdaemo 3235 user 12r REG 8,5 143868 269077 /home/user/.local/share/gvfs-metadata/root (deleted)
bamfdaemo 3235 user 13r REG 8,5  32768 272310 /home/user/.local/share/gvfs-metadata/root-18092a02.log (deleted)
firefox   5291 user 59u REG 8,5  33288 132262 /var/tmp/etilqs_YdeZiWSd5iQwJ4U (deleted)
firefox   5291 user 60w REG 8,5  32768 132271 /var/tmp/etilqs_MNLXhEaEqoXMm9b (deleted)
firefox   5291 user 70u REG 8,5  34952 132297 /var/tmp/etilqs_yXDdwVeMxmmdpNz (deleted)

| Most probably this is an ext2 , ext3 or ext4 file system, which reserves a few percent of the disk space (by default 5%) to be used only by specified users (usually root). If you create the file system with mke2fs then the -m option is what you are looking for:

-m reserved-blocks-percentage
   Specify the percentage of the filesystem blocks reserved for the super-user. This avoids fragmentation, and allows root-owned daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem. The default percentage is 5%.

You can change this value on an already existing ext file system with tune2fs -m . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/220967",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8069/"
]
} |
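For example, to shrink the reserved share on the root filesystem from the question to 1% (freeing roughly 400 MB on a ~10 GB partition; the device name is taken from the df output above):

    tune2fs -m 1 /dev/sda5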
221,970 | This is merely a vocabulary question, but one which keeps turning around in my head. It comes from a practice exam in an LPIC preparation book. The correct answer according to the book is that ~/Documents is a relative directory, because it is relative to the home directory. However, this book contains an honourable ratio of typos and mistakes, so I cannot take for granted everything which is written there. Here I do not agree, because for me ~ acts as a variable expanded by the shell into either the content of the $HOME variable or the current user's home directory path (cf. man bash ), so the actual path is /home/myuser/Documents , which is indeed an absolute directory. Even Wikipedia , for once, seems of no help to me on this topic (even if it seems to confirm that the book is wrong on this one): An absolute or full path points to the same location in a file system regardless of the current working directory. To do that, it must contain the root directory. By contrast, a relative path starts from some given working directory, avoiding the need to provide the full absolute path. Here again, I do not agree: according to this definition, the path /opt/kde3/bin/../lib , which does not depend on the current working directory, should be an absolute one; however, my current understanding of this matches the book author's, making this path a relative one. A quick web search just adds to my frustration. According to the Webster Dictionary: absolute path - A path relative to the root directory. Its first character must be the pathname separator. So $HOME/Documents , or even just $HOME , would not be considered absolute directories? Or does this definition imply variable expansion? What about the shell's ~ character? Is there any reliable definition of relative vs. absolute directory I can find somewhere, and am I wrong all along? | This is essentially a question about the definition of terms. So for your purposes, the answer is whatever LPIC wants. But we can come to some conclusions based on technical facts: If you passed '~/Documents' to a system call , it would look for a directory named exactly ~ in the current directory (and probably fail). So, by the notion of pathnames used by the kernel , this is a relative path — but that's not what we meant. ~ is syntax implemented by the shell (and other programs which imitate it for convenience) which expands it into a real pathname. To illustrate, ~/Documents is approximately the same thing as $HOME/Documents (again, shell syntax). Since $HOME should be an absolute path, the value of $HOME/Documents is also an absolute path. But the text $HOME/Documents or ~/Documents has to be expanded by the shell in order to become the path we mean. Thus if I wanted to be precise and consistent, I would say that ~/Documents is a fragment of shell script which expands to an absolute path. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/221970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53965/"
]
} |
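A quick illustration of the distinction the answer draws (the username and output are hypothetical): quoting suppresses the shell's expansion, leaving a literal — and relative — pathname:

    $ echo ~/Documents          # expanded by the shell first
    /home/myuser/Documents
    $ ls '~/Documents'          # passed to ls verbatim, as a relative path
    ls: cannot access '~/Documents': No such file or directory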
221,983 | yum update --security installs only security updates. I think it's an extension from the yum-security plugin. Is there an equivalent dnf command? (dnf replaced yum in Fedora 22) | You can use dnf-automatic with three settings:

apply_updates = yes
download_updates = yes
upgrade_type = security

(The default configuration file is /etc/dnf/automatic.conf .) Or use:

dnf updateinfo list security

to list the available security updates, then apply them manually. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/221983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11996/"
]
} |
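Once updateinfo has listed the advisories, individual ones can be applied by ID — a sketch (the advisory ID shown is hypothetical):

    dnf updateinfo list security
    dnf upgrade --advisory=FEDORA-2015-1234   # apply one specific advisory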
222,054 | NOTE: If client devices ( computer B in this example) want to obtain internet through the gateway computer, they may still need to configure nameserver resolution. This is not explained here (a gateway does not necessarily serve internet). I am trying to understand the fundamentals of network routing, so I am experimenting with my LAN (I don't need internet for now, just LAN communications). I know network configuration matters are a rather complex thing, but I am just trying to make a computer (say A) act as a gateway for another (say B), both running Ubuntu Linux. I only need B to be capable of reaching the router, which is only reachable for A. This is the case:

Router for computer A --> 192.168.0.1
Computer A - eth0 --> 192.168.0.2
Computer A - eth1 --> 192.168.1.1
Computer B - eth0 --> 192.168.1.2

Computer A connects fine to the router. Computers A and B connect fine (ping, SSH, etc.) between them. Computer B can not reach the router for computer A. I was thinking that just adding computer A as default gateway on B and activating IP forwarding on A would make B able to reach the router for A:

luis@ComputerB:~$ sudo route add default gw 192.168.1.1
luis@ComputerB:~$ sudo routel
target          gateway    source       proto   scope  dev   tbl
127.0.0.0       broadcast  127.0.0.1    kernel  link   lo    local
127.0.0.0 8     local      127.0.0.1    kernel  host   lo    local
127.0.0.1       local      127.0.0.1    kernel  host   lo    local
127.255.255.255 broadcast  127.0.0.1    kernel  link   lo    local
192.168.1.0     broadcast  192.168.1.2  kernel  link   eth0  local
192.168.1.2     local      192.168.1.2  kernel  host   eth0  local
192.168.1.255   broadcast  192.168.1.2  kernel  link   eth0  local
default         192.168.1.1                            eth0
169.254.0.0 16                                  link   eth0
192.168.1.0 24             192.168.1.2  kernel  link   eth0

And on Computer A (the intermediate gateway):

root@ComputerA:~$ echo 1 > /proc/sys/net/ipv4/ip_forward

Computer B can still ping computer A, but the router for A does not answer:

luis@ComputerB:~$ ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
^C

(No ping response.) Is this the correct procedure to make a computer running Linux act as a gateway for another computer in a simple manner? | You are almost there; you just need to make sure traffic gets back to B. Right now you have forwarded traffic from B to the outside world, but A doesn't know how to get traffic back to B. You need A to keep some state about the connections going through it. To do this you will want to enable NAT . You already have step one, which is to allow forwarding. Then you need to add a few firewall rules using iptables :

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

This says: on the network address translation table, after we have figured out the routing of a packet on output eth0 (the external interface), replace the return address information with our own so the return packets come to us. Also, remember that we did this (like a lookup table that remembers this connection).

iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

Allow packets that want to come from eth1 (the internal interface) to go out eth0 (the external interface).

iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT

Use that lookup table we had from before to see if the packet arriving on the external interface actually belongs to a connection that was already initiated from the internal one. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222054",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57439/"
]
} |
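These settings do not survive a reboot by default. A persistence sketch for Debian/Ubuntu-style systems (assuming the iptables-persistent package is installed to load the saved rules at boot):

    echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
    iptables-save > /etc/iptables/rules.v4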
222,121 | Input:

ARCHIVE  B1_NAME B2_NAME B3_NAME  ELEMENT INFO_NAM WERT PROCID
-------- ------- ------- -------- ------- -------- ---- ------
15MinAvg AIRSS   33-GIS  DMDMGIS1 I       MvAvr15m 1123 CP
15MinAvg AIRSS   33-GIS  DMDMGIS1 P       MvAvr15m 2344 CP
15MinAvg AIRSS   33-GIS  DMDMGIS1 Q       MvAvr15m 4545 CP
15MinAvg AIRSS   33-GIS  DMDMGIS2 I       MvAvr15m 6576 CP
15MinAvg AIRSS   33-GIS  DMDMGIS2 P       MvAvr15m 4355 CP
15MinAvg AIRSS   33-GIS  DMDMGIS2 Q       MvAvr15m 6664 CP

Output:

ARCHIVE  B1_NAME B2_NAME B3_NAME  ELEMENT WERT
-------- ------- ------- -------- ------- ----
15MinAvg AIRSS   33-GIS  DMDMGIS1 I       1123
15MinAvg AIRSS   33-GIS  DMDMGIS1 P       2344
15MinAvg AIRSS   33-GIS  DMDMGIS1 Q       4545
15MinAvg AIRSS   33-GIS  DMDMGIS2 I       6576
15MinAvg AIRSS   33-GIS  DMDMGIS2 P       4355
15MinAvg AIRSS   33-GIS  DMDMGIS2 Q       6664

I want to delete the two columns INFO_NAM and PROCID from my input file. | This has been answered before elsewhere on Stack Overflow: "delete a column with awk or sed", "Deleting columns from a file with awk or from command line on linux", etc. I believe awk is the best tool for this:

awk '{print $1,$2,$3,$4,$5,$7}' file

It is possible to use cut as well:

cut -f1,2,3,4,5,7 file

(Note that cut -f splits on single tab characters by default, so for a whitespace-aligned file like the sample above the awk version is the more robust choice.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
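If the columns to drop are known only by position and the files are wide, a small awk sketch that removes fields 6 and 8 (INFO_NAM and PROCID here) without listing every kept field:

    awk '{
        out = ""
        for (i = 1; i <= NF; i++)
            if (i != 6 && i != 8)
                out = out (out == "" ? "" : OFS) $i
        print out
    }' file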
222,129 | I'm on Debian 8 (stable) (Linux zenbook 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1 (2015-05-24) x86_64 GNU/Linux) and tried to apt-get update && apt-get upgrade yesterday. There was an upgrade for mysql-server-5.5 to 5.5.44-0+deb8u1 , with which apt-get has been having problems since then. The program hangs on "Setting up mysql-server-5.5". I then tried to purge the package and install it again, but with the same result. I don't know what else I can try. Suggestions?

$ sudo apt-get install mysql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  mysql-server-5.5
Suggested packages:
  tinyca
The following NEW packages will be installed:
  mysql-server mysql-server-5.5
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/2,088 kB of archives.
After this operation, 32.6 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Preconfiguring packages ...
Selecting previously unselected package mysql-server-5.5.
(Reading database ... 308703 files and directories currently installed.)
Preparing to unpack .../mysql-server-5.5_5.5.44-0+deb8u1_amd64.deb ...
Unpacking mysql-server-5.5 (5.5.44-0+deb8u1) ...
Selecting previously unselected package mysql-server.
Preparing to unpack .../mysql-server_5.5.44-0+deb8u1_all.deb ...
Unpacking mysql-server (5.5.44-0+deb8u1) ...
Processing triggers for systemd (215-17+deb8u1) ...
Processing triggers for man-db (2.7.0.2-5) ...
Setting up mysql-server-5.5 (5.5.44-0+deb8u1) ...

| This was very odd. Two hours ago I left my computer with apt-get still running. I just came back to see that setting up had finally finished. Before, I had waited maybe 15-30 minutes, so this is certainly not normal. But now I can purge the package and reinstall it within seconds. So the solution to this seems to simply be "wait". It works itself out in the end. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222129",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
222,134 | I have a file with values organized in columns, and separated by commas , as in the file below: 324,01,1,113333600000,1,,016,01,1,134954200000,1,,770,01,1,109069200000,1,,853,01,1,111518800000,1,, When I use the following awk command, the delimiter is being changed from commas to spaces Code: awk -F, '{$4=$4/1024}{print $0}' The output becomes: 324 01 1 110677343.75 1 016 01 1 131791210.93 1 770 01 1 106512890.62 1 853 01 1 108905078.12 1 How can I change the value of the field without changing the delimiter? | Set OFS as well: awk -F, -v OFS=, '{$4=$4/1024}1' The OFS determines how output fields are delimited. If you don't set it, the default is a space. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/222134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123325/"
]
} |
222,146 | I have the following code in a file named awktest1.awk : #!/bin/awk -fBEGIN{print "start"}{print $2, "\t", $5}END{print "end"} employee.txt where employee.txt contains the following data: 100 Thomas Manager Sales $5,000 200 Jason Developer Technology $5,500 300 Sanjay Sysadmin Technology $7,000 400 Nisha Manager Marketing $9,500 500 Randy DBA Technology $6,000 I run the awk command as: awk -f awktest1.awk but it just prints start and does not end. Can anyone help me out with what am I doing wrong here? | The error is giving the filename to process in the script; you should remove employee.txt from the script and run it as follows awk -f awktest1.awk employee.txt or even, if the script is executable, ./awktest1.awk employee.txt The script becomes #!/bin/awk -fBEGIN{print "start"}{print $2, "\t", $5}END{print "end"} As it is, awk is waiting for input from standard input instead of reading from a file. That's why it never ends... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/222146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121643/"
]
} |
222,163 | My Bananian Linux is wasting time at logon trying to get a DHCP lease for the eth0 interface, which is not connected. Well, the extender cable is connected to it, but nothing is on the other end. I have auto eth0 iface eth0 inet dhcp set in my /etc/network/interfaces since I do want it to pick up ethernet in case it is connected, but I surely don't want to slow down the startup of the system if the cable is not connected to ethernet. I assumed the system would know this automatically and would not attempt to get a DHCP lease for the interface. Here is what I see at load time (a boot screenshot, not reproduced here; the relevant part is its last three lines). After it understands that the lease isn't coming, it proceeds with the boot. Is there a way I could tell it not to DHCP if there isn't a connected cable? | If you specify allow-hotplug eth0 instead of auto eth0 in /etc/network/interfaces , then the connection will only be initiated by udev when something triggers it, instead of at every boot. That might be sufficient to handle your case, but not necessarily; the interfaces manpage mentions that (Interfaces marked "allow-hotplug" are brought up when udev detects them. This can either be during boot if the interface is already present, or at a later time, for example when plugging in a USB network card. Please note that this does not have anything to do with detecting a network cable being plugged in.) You might need to use /etc/network/if-up.d/00check-network-cable from the ifupdown-extra package to skip the interface if no cable is connected. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/222163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10890/"
]
} |
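For reference, a sketch of the resulting /etc/network/interfaces stanza (eth0 as in the question):
allow-hotplug eth0
iface eth0 inet dhcp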
222,175 | Hi, previous questions on this topic contain answers for Linux, but they do not work for Solaris 10. find . ! -readable -prune does not work in Solaris since -readable is not POSIX. What is the POSIX-compliant command that excludes all “permission denied” messages from “find” in Solaris? Correct answers: jlliagre and random832 gave correct answers. | Here is a POSIX way to prune any non-readable directory with find : find . \( -exec sh -c ' if [ ! -r "$1" ] ; then { exit 1 ; } ; else for i in "$1"/* ; do if [ -d "$i" -a ! -r "$i" ]; then exit 1; fi; done; fi ' sh {} \; -o -prune \) -a -print Note that if this is a full Solaris installation, GNU grep is available in /usr/sfw/bin/ggrep . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/127914/"
]
} |
222,218 | My file, PSS-A (Primary A)PSS-B (Primary B)PSS-C (Primary C)PSS-D (Primary D)PSS-E (Primary E)PSS-F (Primary F)PSS-G (Primary G)PSS-H (Primary H)PSS-I (Primary I)SPARE (SPARE) Output file, 1> PSS-A (Primary A) 2> PSS-B (Primary B) 3> PSS-C (Primary C) 4> PSS-D (Primary D) 5> PSS-E (Primary E) 6> PSS-F (Primary F) 7> PSS-G (Primary G) 8> PSS-H (Primary H) 9> PSS-I (Primary I)10> SPARE (SPARE) | If you want the same format that you have specified awk '{print NR "> " $s}' inputfile > outputfile otherwise, though not standard, most implementations of the cat command can print line numbers for you (numbers padded to width 6 and followed by TAB in at least the GNU, busybox, Solaris and FreeBSD implementations). cat -n inputfile > outputfile Or you can use grep -n (numbers followed by : ) with a regexp like ^ that matches any line: grep -n '^' inputfile > outputfile | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/222218",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
222,221 | Ubuntu 14.04 on a desktop Source Drive: /dev/sda1: 5TB ext4 single drive volume Target Volume: /dev/mapper/archive-lvarchive: raid6 (mdadm) 18TB volume with lvm partition and ext4 There are roughly 15 million files to move, and some may be duplicates (I do not want to overwrite duplicates). Command used (from source directory) was: ls -U |xargs -i -t mv -n {} /mnt/archive/targetDir/{} This has been going on for a few days as expected, but I am getting the error in increasing frequency. When it started the target drive was about 70% full, now its about 90%. It used to be about 1/200 of the moves would state and error, now its about 1/5. None of the files are over 100Mb, most are around 100k Some info: $ df -hFilesystem Size Used Avail Use% Mounted on/dev/sdb3 155G 5.5G 142G 4% /none 4.0K 0 4.0K 0% /sys/fs/cgroupudev 3.9G 4.0K 3.9G 1% /devtmpfs 797M 2.9M 794M 1% /runnone 5.0M 4.0K 5.0M 1% /run/locknone 3.9G 0 3.9G 0% /run/shmnone 100M 0 100M 0% /run/user/dev/sdb1 19G 78M 18G 1% /boot/dev/mapper/archive-lvarchive 18T 15T 1.8T 90% /mnt/archive/dev/sda1 4.6T 1.1T 3.3T 25% /mnt/tmp$ df -iFilesystem Inodes IUsed IFree IUse% Mounted on/dev/sdb3 10297344 222248 10075096 3% /none 1019711 4 1019707 1% /sys/fs/cgroupudev 1016768 500 1016268 1% /devtmpfs 1019711 1022 1018689 1% /runnone 1019711 5 1019706 1% /run/locknone 1019711 1 1019710 1% /run/shmnone 1019711 2 1019709 1% /run/user/dev/sdb1 4940000 582 4939418 1% /boot/dev/mapper/archive-lvarchive 289966080 44899541 245066539 16% /mnt/archive/dev/sda1 152621056 5391544 147229512 4% /mnt/tmp Here's my output: mv -n 747265521.pdf /mnt/archive/targetDir/747265521.pdf mv -n 61078318.pdf /mnt/archive/targetDir/61078318.pdf mv -n 709099107.pdf /mnt/archive/targetDir/709099107.pdf mv -n 75286077.pdf /mnt/archive/targetDir/75286077.pdf mv: cannot create regular file ‘/mnt/archive/targetDir/75286077.pdf’: No space left on devicemv -n 796522548.pdf /mnt/archive/targetDir/796522548.pdf mv: cannot create regular file ‘/mnt/archive/targetDir/796522548.pdf’: No space left on devicemv -n 685163563.pdf /mnt/archive/targetDir/685163563.pdf mv -n 701433025.pdf /mnt/archive/targetDir/701433025.pd I've found LOTS of postings on this error, but the prognosis doesn't fit. Such issues as "your drive is actually full" or "you've run out of inodes" or even "your /boot volume is full". Mostly, though, they deal with 3rd party software causing an issue because of how it handles the files, and they are all constant, meaning EVERY move fails. Thanks. EDIT:here is a sample failed and succeeded file: FAILED (still on source drive) ls -lhs 702637545.pdf16K -rw-rw-r-- 1 myUser myUser 16K Jul 24 20:52 702637545.pdf SUCCEEDED (On target volume) ls -lhs /mnt/archive/targetDir/704886680.pdf104K -rw-rw-r-- 1 myUser myUser 103K Jul 25 01:22 /mnt/archive/targetDir/704886680.pdf Also, while not all files fail, a file which fails will ALWAYS fail. If I retry it over and over it is consistent. 
EDIT: Some additional commands per request by @mjturner $ ls -ld /mnt/archive/targetDirdrwxrwxr-x 2 myUser myUser 1064583168 Aug 10 05:07 /mnt/archive/targetDir$ tune2fs -l /dev/mapper/archive-lvarchivetune2fs 1.42.10 (18-May-2014)Filesystem volume name: <none>Last mounted on: /mnt/archiveFilesystem UUID: af7e7b38-f12a-498b-b127-0ccd29459376Filesystem magic number: 0xEF53Filesystem revision #: 1 (dynamic)Filesystem features: has_journal ext_attr dir_index filetype needs_recovery extent 64bit flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isizeFilesystem flags: signed_directory_hash Default mount options: user_xattr aclFilesystem state: cleanErrors behavior: ContinueFilesystem OS type: LinuxInode count: 289966080Block count: 4639456256Reserved block count: 231972812Free blocks: 1274786115Free inodes: 256343444First block: 0Block size: 4096Fragment size: 4096Group descriptor size: 64Blocks per group: 32768Fragments per group: 32768Inodes per group: 2048Inode blocks per group: 128RAID stride: 128RAID stripe width: 512Flex block group size: 16Filesystem created: Thu Jun 25 12:05:12 2015Last mount time: Mon Aug 3 18:49:29 2015Last write time: Mon Aug 3 18:49:29 2015Mount count: 8Maximum mount count: -1Last checked: Thu Jun 25 12:05:12 2015Check interval: 0 (<none>)Lifetime writes: 24 GBReserved blocks uid: 0 (user root)Reserved blocks gid: 0 (group root)First inode: 11Inode size: 256Required extra isize: 28Desired extra isize: 28Journal inode: 8Default directory hash: half_md4Directory Hash Seed: 3ea3edc4-7638-45cd-8db8-36ab3669e868Journal backup: inode blocks$ tune2fs -l /dev/sda1tune2fs 1.42.10 (18-May-2014)Filesystem volume name: <none>Last mounted on: /mnt/tmpFilesystem UUID: 10df1bea-64fc-468e-8ea0-10f3a4cb9a79Filesystem magic number: 0xEF53Filesystem revision #: 1 (dynamic)Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isizeFilesystem flags: signed_directory_hash Default mount options: user_xattr aclFilesystem state: cleanErrors behavior: ContinueFilesystem OS type: LinuxInode count: 152621056Block count: 1220942336Reserved block count: 61047116Free blocks: 367343926Free inodes: 135953194First block: 0Block size: 4096Fragment size: 4096Reserved GDT blocks: 732Blocks per group: 32768Fragments per group: 32768Inodes per group: 4096Inode blocks per group: 256Flex block group size: 16Filesystem created: Thu Jul 23 13:54:13 2015Last mount time: Tue Aug 4 04:35:06 2015Last write time: Tue Aug 4 04:35:06 2015Mount count: 3Maximum mount count: -1Last checked: Thu Jul 23 13:54:13 2015Check interval: 0 (<none>)Lifetime writes: 150 MBReserved blocks uid: 0 (user root)Reserved blocks gid: 0 (group root)First inode: 11Inode size: 256Required extra isize: 28Desired extra isize: 28Journal inode: 8Default directory hash: half_md4Directory Hash Seed: a266fec5-bc86-402b-9fa0-61e2ad9b5b50Journal backup: inode blocks | Bug in the implementation of ext4 feature dir_index which you are using on your destination filesystem. Solution : recreate filesytem without dir_index. Or disable feature using tune2fs (some caution required, see related link Novell SuSE 10/11: Disable H-Tree Indexing on an ext3 Filesystem which although relates to ext3 may need similar caution. 
(get a really good backup made of the filesystem)
(unmount the filesystem)
tune2fs -O ^dir_index /dev/foo
e2fsck -fDvy /dev/foo
(mount the filesystem)
ext4: Mysterious “No space left on device”-errors ext4 has a feature called dir_index enabled by default, which is quite susceptible to hash-collisions. ...... ext4 has the possibility to hash the filenames of its contents. This enhances performance, but has a “small” problem: ext4 does not grow its hashtable, when it starts to fill up. Instead it returns -ENOSPC or “no space left on device”. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45877/"
]
} |
222,229 | I am not an experienced Linux user. I am trying to select column data from 2 separate files and write to a third file using awk. I have tried to paste the files together, i.e. paste file1 file2 and then awk, but the data is appended on the next line (alternating). The data looks like this: file1 HZ880 0.00 HAM86 1.13 HAM40 1.60 file2 HZ880 -31.816826 115.757963 35.8909 0.0170 -.0170 HAM86 -31.824923 115.761507 33.6108 0.0165 -.0165 HAM40 -31.828528 115.762380 38.8434 0.0163 -.0163 How do I create a new file with column2 (file1) and column4 (file2)? I have tried the following: paste ${LEV_IN1} ${LEV_IN2} | awk '{print $2,$4}' > ${TEMP2} where LEV_IN1 is file1 and LEV_IN2 is file2 What am I doing wrong? | When you paste the two files, each output line contains the columns of file1 followed by the columns of file2, so the field numbers shift: file1 supplies fields 1 and 2, and file2 supplies fields 3 to 8. Column 4 of file2 is therefore field 6 of the pasted line, so print $2,$6 rather than $2,$4 : paste "${LEV_IN1}" "${LEV_IN2}" | awk '{print $2,$6}' > "${TEMP2}" As for the second file's data appearing on the next line: a common cause is DOS line endings (carriage returns) in one of the input files, which make the pasted output appear to alternate. If that is the case here, strip them first with tr -d '\r' < file > file.fixed and paste the fixed copies. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/127951/"
]
} |
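A quick way to see the field renumbering described in the answer above is to look at the first pasted line; the file names are the ones from the question:
paste file1 file2 | head -n 1
# HZ880  0.00   HZ880  -31.816826  115.757963  35.8909  0.0170  -.0170
#  $1     $2     $3        $4          $5          $6      $7      $8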
222,231 | This conundrum is caused by running git pull from root. There are various reasons for me to do so... I want my device to update code when booting up, and rc.local is run by root. It all works mostly fine ( npm install tends to fail when run by root, but that's a whole 'nother topic), the problem arises when I try to use the git repository with a non-root user afterward... some of the git files have been written by root, so now I can't use it anymore (permission denied). So I'd like to recursively chown it back, but there doesn't seem to be anything that really works on all of the little git files. I tried the -R flag and the ./**/* path. My guess is that neither of those techniques descend into dot-directories. | The following command works for me on Ubuntu. It recursively changes the ownership of all files and directories, including dot-directories such as .git : sudo chown -R someuser:somegroup YourDir | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222231",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12497/"
]
} |
222,258 | I'd like to cover the bases on a vulnerability which tries to download itself and save the result in a newly created directory inside the /tmp/ directory. To be on the safe side, I wish to make it impossible to create folders inside /tmp/ . Or if that is not feasible, I would like to prevent creating folders in just one specific directory inside /tmp . | use ls -l -d /tmp/ and you will see that the permissions are set to drwxrwxrwt , i.e. d : a directory, rwx : read, write and execute permissions allowed for owner, group and others (in this order), t sticky bit, i.e. only file owners are allowed to delete files (not the group despite permissions). Let's leave the sticky bit aside for the moment and mention that a directory needs to be executable for being accessible. Now if you want to restrict write permission for others (owner and group is root) then use chmod o-w /tmp/ (as root, i.e. using sudo ) HOWEVER: /tmp/ is rather important for may processes that need temporary data, so I would suggest not to restrict permissions for this folder at all! Since you are heading for a specific folder the simplest would be to manually create that folder (as root) and then restrict permission for it: sudo mkdir /tmp/badfoldersudo chmod -R o-w /tmp/badfolder/ Side note on chmod: -R do recursively, u,g,o: user,group,other , +- add/remove permission to r,w,x read,write,execute. I.e. for allowing gorup members to write to a file, use chmod g+w file . Update: In case the process is running as root, you also need to set the 'i' attribute. From man chattr A file with the `i' attribute cannot be modified: it cannot be deleted or renamed, no link can be created to this file and no data can be written to the file. Only the superuser or a process possessing the CAP_LINUX_IMMUTABLE capability can set or clear this attribute. This would also apply if the folder was not owned by root. Simply use chattr +i /tmp/badfolder Use chattr -i /tmp/badfolder for removing it and -R for doing either recursively. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222258",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122396/"
]
} |
222,283 | We have here a read only Bash variable. I am not allowed to unset that variable. $ echo $TMOUT1800 As a workaround I wrote those lines (that my session don't exit) #!/usr/bin/perl$|++;while (1) { print "\e[0n"; sleep 120; } Is there an official package (rpm) that does similar (like above Perl code) in a CentOS7/RHEL7 repository? I don't like to open up a vim editor, I wish a command. | You can issue perl commands from the command line... perl -e '$|++; while (1) { print "\e[0n"; sleep 120; }' or you could do the same in shell (a sh / bash example): while sleep 120; do printf '\33[0n'; done Or you could use watch : watch -n 120 printf '\33[0n' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26612/"
]
} |
222,333 | I am executing the command ls > a.txt | sort > b.txt This command is doing the following things: executing ls ; sorting it; creating a.txt and storing the sorted output in a.txt ; creating b.txt , but it's empty. Can anyone explain this? I am implementing my own shell, for which I need to understand this behavior & simulate it. | The | will take the output of the command on the left and give it to the input of the command on the right. The > operator will take the output of the command and put it into a file. That means, in your example, by the time it gets to the | there is no output left; it's all gone into a.txt . So the sort on the right operates on an empty string and saves that to b.txt What you would probably like is to use the tee command which will both write to a file and stdout like ls | tee a.txt | sort > b.txt Though I'm really curious what you're trying to do, since ls can/will sort things for you as well. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/222333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128036/"
]
} |
222,359 | I wrote a function that checks for a corrupted archive using a CRC checksum. To test it, I just opened the archive and scrambled the content with a hex editor. The problem is that I do not believe that this is the correct way to generate a corrupted file. Is there any other way to create a "controlled corruption", so it won't be totally random but can simulate what happens with real corrupted archives? I never had to corrupt something on purpose so I am not really sure how to do so, beside the random scrambling of data in a file. | I haven't done much fuzz testing either, but here's two ideas: Write some zeroes into the middle of the file. Use dd with conv=notrunc . This writes a single byte (block-size=1 count=1): dd if=/dev/zero of=file_to_fuzz.zip bs=1 count=1 seek=N conv=notrunc Using /dev/urandom as a source is also an option. Alternatively, punch multiple-of-4k holes with fallocate --punch-hole . You could even fallocate --collapse-range to cut out a page without leaving a zero-filled hole. (This will change the file size). A download resumed at the wrong place would match the --collapse-range scenario. An incomplete torrent will match the punch-hole scenario. (Sparse file or pre-allocated extents, either read as zero anywhere that hasn't been written yet.) Bad RAM (in the system you downloaded the file from) can cause corruption, and optical drives can also corrupt files (their ECC isn't always strong enough to recover perfectly from scratches or fading of the dye). DVD sectors (ECC blocks) are 2048B , but single byte or even single-bit errors can happen. Some drives will probably give you the bad uncorrectable data instead of a read-error for the sector, especially if you read in raw mode, or w/e it's called. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119523/"
]
} |
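Building on the dd approach above, a small bash sketch that overwrites one random byte with a zero byte (no change if it already was zero); the filename is a placeholder, and stat -c %s assumes GNU coreutils:
f=file_to_fuzz.zip
size=$(stat -c %s "$f")
# RANDOM only reaches 32767, so combine two draws; good enough for files under ~1 GiB
off=$(( (RANDOM * 32768 + RANDOM) % size ))
dd if=/dev/zero of="$f" bs=1 seek="$off" count=1 conv=notrunc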
222,372 | I know that rm -f file1 will forcefully remove file1 without prompting me. I also know that rm -i file1 will first prompt me before removing file1 Now if you execute rm -if file1 , this will also forcefully remove file1 without prompting me. However, if you execute rm -fi file1 , it will prompt me before removing file1 . So is it true that when combining command options, the last one will take precedence ? like rm -if , then -f will take precedence, but rm -fi then the -i will take precedence. The ls command for example, it doesn't matter if you said ls -latR or ls -Rtal . So I guess it only matters when you have contradictory command options like rm -if , is that correct? | When using rm with both -i and -f options, the first one will be ignored. This is documented in the POSIX standard: -f Do not prompt for confirmation. Do not write diagnostic messages or modify the exit status in the case of nonexistent operands. Any previous occurrences of the -i option shall be ignored. -i Prompt for confirmation as described previously. Any previous occurrences of the -f option shall be ignored. and also in GNU info page: ‘-f’‘--force’ Ignore nonexistent files and missing operands, and never prompt the user. Ignore any previous --interactive (-i) option.‘-i’ Prompt whether to remove each file. If the response is not affirmative, the file is skipped. Ignore any previous --force (-f) option. Let's see what happens under the hood: rm processes its option with getopt(3) , specifically getopt_long . This function will process the option arguments in the command line ( **argv ) in order of appearance: If getopt() is called repeatedly, it returns successively each of the option characters from each of the option elements. This function is typically called in a loop until all options are processed. From this functions perspective, the options are processed in order. What actually happens, however, is application dependent, as the application logic can choose to detect conflicting options, override them, or present an error. For the case of rm and the i and f options, they perfectly overwrite eachother. From rm.c : 234 case 'f':235 x.interactive = RMI_NEVER;236 x.ignore_missing_files = true;237 prompt_once = false;238 break;239 240 case 'i':241 x.interactive = RMI_ALWAYS;242 x.ignore_missing_files = false;243 prompt_once = false;244 break; Both options set the same variables, and the state of these variables will be whichever option is last in the command line. The effect of this is inline with the POSIX standard and the rm documentation. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222372",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100193/"
]
} |
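The same last-option-wins behaviour is easy to reproduce in your own scripts with getopts, since each occurrence simply overwrites the state set by the previous one; a minimal sketch:
#!/bin/sh
interactive=no
while getopts if opt; do
  case $opt in
    i) interactive=yes ;;   # a later -i overrides an earlier -f
    f) interactive=no ;;    # a later -f overrides an earlier -i
  esac
done
echo "interactive=$interactive"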
222,394 | I've got a few, quite silly, non-technical questions about giving codenames to Debian releases. Each Debian release has its unique codename, which is (so far) a characters' name from Toy Story movies by Pixar . Here is list of all assigned codenames so far: release 1.1 is buzz (Buzz Lightyear) - the spaceman, release 1.2 is rex - the tyrannosaurus, release 1.3.x is bo (Bo Peep) - the girl who took care of the sheep, release 2.0 is hamm - the piggy bank, release 2.1 is slink (Slinky Dog) - the toy dog, release 2.2 is potato - Mr. Potato, release 3.0 is woody - the cowboy, release 3.1 is sarge - the sergeant of the Green Plastic Army Men, release 4.0 is etch - the toy blackboard (Etch-a-Sketch), release 5.0 is lenny - the toy binoculars, release 6.0 is squeeze - the name for the three-eyed aliens, release 7.0 is wheezy - the name of the rubber toy penguin with a red bow tie, release 8.0 is jessie - the name of the yodelling cowgirl, release 9.0 is stretch - a purple rubbery octopus toy at Sunnyside Daycare , release 10.0 is buster - Andy 's pet dachshund (currently stable ), release 11.0 is bullseye - Woody 's horse. List of upcoming major Debian releases' codenames after bullseye : release 12.0 is bookworm - an intelligent worm toy with a built-in flashlight (currently testing ), release 13.0 is trixie - a blue plastic Triceratops. There are also: special codename sid ( S till I n D evelopment ) which is symbolic link to codename which is currently unstable , stable which is symbolic link to codename which is currently stable, testing which is symbolic link to codename which is currently testing. The list of Toy Story characters is quite robust but at some time, there will be no more characters' names to assign. My questions are: What codenames will be assigned if we run out of characters' names? Who decides what is codename of next release (please don't answer ambiguously like: 'community' )? How many releases' names are planned ahead? BTW: Interesting quote from debian.org/doc/manuals : The decision of using Toy Story names was made by Bruce Perens whowas, at the time, the Debian Project Leader and was working also at Pixar , the company that produced the movies. Infographics by Claudio Ferreira Filho (@filhocf) ( license : CC BY-SA 4.0 ). | I'll answer your questions out of order: the release team chooses code names (see their task description ), two releases ahead; the next three releases are Bullseye (Debian 11), Bookworm (Debian 12), and Trixie (Debian 13); and I don't think we're worried about running out of names yet... As pointed out by eyoung100 , Buster is Andy's dog. As you mention in your updated question, Bullseye is Woody's horse. Bookworm is the intelligent, flashlight-wielding worm toy from Toy Story 3 . Trixie is Bonnie’s triceratops from Toy Story 3 . Also, Sid is the name of the next-door kid who breaks all his toys . "Still in development" is a backronym. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28115/"
]
} |
222,440 | When using the command line, often it gets very cluttered. Making it inconvenient to examine past commands and their outputs for example. I would like to have a newline added each time before the command prompt is shown. Like so: <clutter><blank line>name@machine:~$ I use the bash shell. How can this be achieved? | One way to achieve this is by modifying the .bashrc file. Simply place the following at the end of the .bashrc file. PS1="\n$PS1" To explain how this works, PS1 is the variable containing what should be displayed as the prompt. All this is saying is "set PS1 to the previous contents of PS1 , with a newline character prepended". Putting it in .bashrc on most distros just makes bash run it every time you open an interactive shell (but not a login shell - see Difference between Login Shell and Non-Login Shell? ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85507/"
]
} |
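If .bashrc can end up being sourced more than once in the same session, a guard keeps the newline from stacking up; note that PS1 stores the literal two characters \n, which bash expands when drawing the prompt, so that is what the pattern tests for:
case $PS1 in
  '\n'*) ;;                 # already prefixed, leave it alone
  *) PS1="\n$PS1" ;;
esac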
222,473 | I have been trying for several days now (had to reinstall arch twice during), with setting up GPU passthrough on my pc without success. The hardware is Asus Z97-P Intel I5-4690 AMD Radeon R9 380 (catalyst sees it as R9 285) which should be capable of IOMMU. My computer runs Arch Linux. I have been following the following two articles on the topic: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF http://vfio.blogspot.hu/2015/05/vfio-gpu-how-to-series-part-3-host.html The Goal Unfortunately I only have one video card (and intel on-board) but I would be totally happy with starting the VM from the command line when I want to use Windows, otherwise I would like to just type startx to utilize the graphics card to the fglrx module. How I tried to achieve it I passed the intel_iommu=on option to initrd, which resulted in the following list using # find /sys/kernel/iommu_groups -type l/sys/kernel/iommu_groups/0/devices/0000:00:00.0/sys/kernel/iommu_groups/1/devices/0000:00:01.0/sys/kernel/iommu_groups/1/devices/0000:01:00.0/sys/kernel/iommu_groups/1/devices/0000:01:00.1/sys/kernel/iommu_groups/2/devices/0000:00:14.0/sys/kernel/iommu_groups/3/devices/0000:00:16.0/sys/kernel/iommu_groups/4/devices/0000:00:1a.0/sys/kernel/iommu_groups/5/devices/0000:00:1b.0/sys/kernel/iommu_groups/6/devices/0000:00:1c.0/sys/kernel/iommu_groups/6/devices/0000:00:1c.2/sys/kernel/iommu_groups/6/devices/0000:00:1c.3/sys/kernel/iommu_groups/6/devices/0000:03:00.0/sys/kernel/iommu_groups/6/devices/0000:04:00.0/sys/kernel/iommu_groups/7/devices/0000:00:1d.0/sys/kernel/iommu_groups/8/devices/0000:00:1f.0/sys/kernel/iommu_groups/8/devices/0000:00:1f.2/sys/kernel/iommu_groups/8/devices/0000:00:1f.3 which might mean that IOMMU is enabled successfully, but according to arch wiki it might not have been setup correctly (see last line of code): #dmesg|grep -e DMAR -e IOMMU[ 0.000000] ACPI: DMAR 0x00000000DDB41D40 000080 (v01 INTEL BDW 00000001 INTL 00000001)[ 0.000000] Intel-IOMMU: enabled[ 0.024745] dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap d2008c20660462 ecap f010da[ 0.024747] IOAPIC id 8 under DRHD base 0xfed90000 IOMMU 0[ 0.296873] DMAR: No ATSR found[ 0.296964] IOMMU: dmar0 using Queued invalidation[ 0.296965] IOMMU: Setting RMRR:[ 0.296973] IOMMU: Setting identity map for device 0000:00:14.0 [0xdee7d000 - 0xdee8bfff][ 0.296996] IOMMU: Setting identity map for device 0000:00:1a.0 [0xdee7d000 - 0xdee8bfff][ 0.297012] IOMMU: Setting identity map for device 0000:00:1d.0 [0xdee7d000 - 0xdee8bfff][ 0.297024] IOMMU: Prepare 0-16MiB unity mapping for LPC[ 0.297029] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff][ 3.326568] AMD IOMMUv2 driver by Joerg Roedel <[email protected]>[ 3.326569] AMD IOMMUv2 functionality not available on this system I have tried the other options mentioned on the arch wiki site ( pass pci-stub to MODULES in mkinitcpio.conf) , but this last line persisted. First question: Is the first command's output sufficient to say that my system correctly utilizes IOMMU? With a bit of distrust, I have arrived at the part where I had to bind my VGA to pci-stub, which have led to some reinstall already, so I would like to have at least some directions to go from here, whether to use pci-stub or VFIO and such. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222473",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107849/"
]
} |
222,487 | I want to dynamically create a sequence of strings by manipulate an array of elements and create some arithmetic procedure. for name in FIRST SECOND THIRD FOURTH FIFTH; do $name = $(( $6 + 1 )) $name = "${$name}q;d" echo "${$name}"; printf "\n"done The desire outcome would be the below for $6 equals 0 . 1q;d2q;d3q;d4q;d5q;d But I get this error reel_first_part.sh: line 18: FIRST: command not foundreel_first_part.sh: line 19: ${$name}q;d: bad substitutionreel_first_part.sh: line 18: FIRST: command not foundreel_first_part.sh: line 19: ${$name}q;d: bad substitutionreel_first_part.sh: line 18: FIRST: command not foundreel_first_part.sh: line 19: ${$name}q;d: bad substitution I guess it's something simple. It used to work when I did something like FIRST=$(( $6 + 1 ))FIRST="${FIRST}q;d" | First of all there can not be any space around = in variable declaration in bash . To get what you want you can use eval . For example a sample script like yours : #!/bin/bashi=0for name in FIRST SECOND THIRD FOURTH FIFTH; do eval "$name"="'$(( $i + 1 ))q;d'" printf '%s\n' "${!name}" i=$(( $i + 1 ))done Prints : 1q;d2q;d3q;d4q;d5q;d Use eval cautiously, some people call it evil for some valid reason. declare would work too : #!/bin/bashi=0for name in FIRST SECOND THIRD FOURTH FIFTH; do declare "$name"="$(( $i + 1 ))q;d" printf '%s\n' "${!name}" i=$(( $i + 1 ))done also prints : 1q;d2q;d3q;d4q;d5q;d | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/222487",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40491/"
]
} |
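On bash 4.3 and later, namerefs avoid eval altogether; a minimal sketch of the same loop (output identical to the scripts above):
#!/bin/bash
i=0
for name in FIRST SECOND THIRD FOURTH FIFTH; do
  declare -n ref="$name"     # ref now aliases the variable named in $name
  ref="$(( i + 1 ))q;d"
  printf '%s\n' "$ref"
  unset -n ref               # drop the alias before re-pointing it next iteration
  i=$(( i + 1 ))
done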
222,492 | How can I list the current directory or any directory path contents without using the ls command? Can we do it using the echo command? | printf '%s\n' * as a shell command will list the non-hidden files in the current directory, one per line. If there's no non-hidden file, it will display * alone except in those shells where that issue has been fixed (csh, tcsh, fish, zsh, bash -O failglob). echo * will list the non-hidden files separated by space characters except (depending on the shell/echo implementation) when the first file name starts with - or file names contain backslash characters. It's important to note that it's the shell expanding that * into the list of files before passing it to the command. You can use any command here, like head -- * to display the first few lines (with those head implementations that accept several files), stat -- * ... If you want to include hidden files: printf '%s\n' .* * (depending on the shell, that will also include . and .. ). With zsh : printf '%s\n' *(D) Among the other applications (beside shell globs and ls ) that can list the content of a directory, there's also find : find . ! -name . -prune (includes hidden files except . and .. ). On Linux, lsattr (lists the Linux extended file attributes): lsattr lsattr -a # to include hidden files like with ls | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48646/"
]
} |
222,528 | With the example command man apropos > outputfile a text file is generated which contains the formatted man page of apropos (with some little differences with respect to man apropos directly printed on screen, such as bold characters). But I would like to manually set the maximum line width of the generated output file, so that all the paragraphs will be justified to that width. man pages are created through groff : for example, I tried to put .ll 50 before a paragraph of the original .gz man source text file, but it is trivial if I need to work on several man pages. Moreover not all the characters are recognized: apropos.1:45: warning: can't find character with input code 195apropos.1:45: warning: can't find character with input code 168apropos.1:47: warning: can't find character with input code 178apropos.1:131: warning: can't find character with input code 169 So, I wonder if a more straightforward method exists. How to modify the maximum line width, during the creation of an outputfile ? Is there some specific command? Edit : (All the following considerations are about Ubuntu 18.04: I can no more test them in previous versions, included the 14.04 of the above question.) As regards a one-line temporary solution, if MANWIDTH has not been already exported with a custom value, there is no difference between $ MANWIDTH=60 man apropos > outputfile and $ COLUMNS=60 man apropos > outputfile The first one, using MANWIDTH , is however better in principle. Edit 2 (not strictly related to the question): To make instead a permanent width setting to be applied to any manpage printing, it is necessary to export the desired value of the variable. With: $ export MANWIDTH=60# zero or more additional lines$ man apropos > outputfile man apropos will be printed with the same width regardless of any terminal window resizing. Instead, $ export COLUMNS=60# zero or more additional lines$ man apropos > outputfile will provide the same result as before only if the terminal window is not resized between export and man <page> > outputfile . | Use the MANWIDTH environment variable: MANWIDTH=60 man apropos > apropos.txt The manpage for man 2.7.4 says: If $MANWIDTH is set, its value is used as the line length for which manual pages should be formatted. If it is not set, manual pages will be formatted with a line length appropriate to the current terminal (using the value of $COLUMNS, an ioctl(2) if available, or falling back to 80 characters if neither is available). That is, it overrides both COLUMNS and the ioctl value. I prefer to not rely on modifying COLUMNS (although it does work here) since its value is updated dynamically every time the window size changes. Using MANWIDTH instead of COLUMNS also allows you to make the change permanent by adding a line such as export MANWIDTH=60 to your shell startup file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/222528",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48707/"
]
} |
222,602 | I have a log file that was created by nobody : nogroup , which is actively being logged to; I wanted to emulate adding a message to that log file. My first thought was to: $ sudo su nobody This account is currently not available. | You have a way simpler solution, just run: su -s /bin/bash nobody (replace /bin/bash with the shell of your choice). The This account is currently not available. error is due to the fact that the nobody user's default shell is /usr/sbin/nologin ; su -s forces the system to use another shell. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61349/"
]
} |
222,705 | I am trying to add field at the end of tag using sed script. Suppose I have a tag in XML file: <book name="Sed tutorial" price="250"/> Now I want to add field as Book_Width="A" after end of <book/> tag so that my tag becomes: <book name="Sed tutorial" price="250" Book_Width="A"/> I tried with sed : sed '/<book "[^>]*>/ a Book_Width="A"' but it gives: <book name="Sed tutorial" price="250"/>Book_Width="A" | You should not parse xml with sed , use an xml parser like xmlstarlet instead. For your task it would be: xmlstarlet ed -O --inplace --insert "/book" --type attr -n Book_Width -v A xml_file The file content is then: <book name="Sed tutorial" price="250" Book_Width="A"/> The ed means edit mode to edit the xml tree -O omits the xml tag We want to insert something with --insert "/book" is the path where to insert --type attr : it's an attribute, we want to insert The name -n of the attribute The value -v | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/222705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99822/"
]
} |
222,709 | I have a file str.txt with the following sample records. 31,2713810299,1,11-Aug-15 19:52:1032,2713810833,1,11-Aug-15 21:36:18 Now I want to print output with awk as below. cat str.txt|awk -F, '{print substr("$4",1,9)}' - The output should be: '11-Aug-15' '11-Aug-15' | a single quote would be \x27 awk -F, '{print "\x27"substr($4,1,9)"\x27" }' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125503/"
]
} |
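An alternative that avoids the escape code entirely is to hand the quote to awk as a variable; a small sketch:
awk -F, -v q="'" '{print q substr($4,1,9) q}' str.txt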
222,722 | I have a large text file. I need to quickly pull a bunch of lines, say from #14600 to #14700, from this file, as a separate file. How can it be done? | Using sed sed -n 14600,14700p filename > newfile Where: p : Print out the pattern space (to the standard output). This command is usually only used in conjunction with the -n command-line option. -n : Suppress the automatic printing of the pattern space, so that only the lines explicitly printed with p end up in the output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128256/"
]
} |
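On a very large file it can be worth telling sed to quit as soon as the range has been printed, so it does not keep reading to the end of the file; a small variation on the answer above:
sed -n '14600,14700p;14700q' filename > newfile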
222,727 | I'm using Ubuntu on my personal laptop, and on my system, whenever I use rm on a file, it is gone. for good. The problem is on my university server. I am trying to delete a folder: environment/tests from my home directory.To my big surprise, when I use rm environment/tests (for some strange reason, rm does not require the -R option in order to delete a folder on the university server...), this is what I get: u2 **** 114 : rm environment/tests/bin/mv: cannot move `environment/tests/' to `/u/stud/****/../TrashCan/****/tests': File existsu2 **** 115 : (the **** are replacement for my username) So I tried to remove it from the Trash can, but realized it's a recursive call... :) u2 **** 157 : rm ~/../TrashCan/****/tests/bin/mv: `/u/stud/****/../TrashCan/****/tests' and `/u/stud/****/../TrashCan/****/tests' are the same fileu2 **** 158 : First of all, what does mv has to do here? (notice it is a /bin/mv error) Second, how can I delete this folder once and for all? In fact, while I'm at it, I'd like to completely empty the TrashCan.But again, this: u2 **** 169 : rm * ~/../TrashCan/ Does not work. The server runs the following version: u2 **** 170 : uname -aLinux u2 2.6.32-504.8.1.el6.x86_64 #1 SMP Wed Jan 28 21:11:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linuxu2 **** 171 : | The command rm as been aliased to /bin/mv -f !* /u/stud/****/../TrashCan/**** . Prefix the aliased command with \ to disable the alias: \rm , will run the original rm command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222727",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84791/"
]
} |
222,735 | How can I get hard disk capacity, usage, etc. using the /proc or /sys filesystems? If it is possible, please tell me which file(s) I need to process to get that information. | cat /sys/block/sda/size The file above returns the size as a count of 512-byte sectors, e.g. 312581808. Multiply that number by 512 to get the capacity in bytes, which you can then convert to GB. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119746/"
]
} |
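A small sketch of the arithmetic described above, with sda as a placeholder for whichever disk is being queried:
sectors=$(cat /sys/block/sda/size)
bytes=$(( sectors * 512 ))    # the kernel reports block-device sizes in 512-byte sectors
echo "$bytes bytes (~$(( bytes / 1024 / 1024 / 1024 )) GiB)"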
222,738 | I would like to give a user permissions to create and read files in a particular directory, but not to modify or delete files. If the user can append to files that is ok, but I'd rather not. This is on Ubuntu Linux. I think this is impossible with standard Unix file permissions, but perhaps this is possible using ACLs? The user will always be connecting using SFTP, so if there was some way to control this within SFTP (as opposed to OS permissions) that would be fine. To be absolutely clear, I want the following: echo hello > test # succeeds, because test doesn't exist, and creation is allowed echo hello >> test # can succeed or fail, depending on whether appending is allowed echo hello2 > test # fails, because test already exists, and modification is not allowed cat test # succeeds, because reads are allowed rm test # fails, because delete is not allowed If you're wondering why I want to do this, it's to make a Duplicati backup system resistant to Ransomware. | You could use bindfs like: $ ls -ld dirdrwxr-xr-t 2 stephane stephane 4096 Aug 12 12:28 dir/ That directory is owned by stephane, with group stephane (stephane being its only member). Also note the t that prevents users from renaming or removing entries that they don't own. $ sudo bindfs -u root -p u=rwD,g=r,dg=rwx,o=rD dir dir We bindfs dir over itself with fixed ownership and permissions for files and directories. All files appear owned by root (though underneath in the real directory they're still owned by stephane). Directories get drwxrwxr-x root stephane permissions while other types of files get -rw-r--r-- root stephane ones. $ ls -ld dirdrwxrwxr-t 2 root stephane 4096 Aug 12 12:28 dir Now creating a file works because the directory is writeable: $ echo test > dir/file$ ls -ld dir/file-rw-r--r-- 1 root stephane 5 Aug 12 12:29 dir/file However it's not possible to do a second write open() on that file as we don't have permission on it: $ echo test > dir/filezsh: permission denied: dir/file (note that appending is not allowed there (as not part of your initial requirements)). A limitation: while you can't remove or rename entries in dir because of the t bit, new directories that you create in there won't have that t bit, so you'll be able to rename or delete entries there. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/222738",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128291/"
]
} |
222,750 | I'm having a little issue with yum. I'm trying to install a package but I realised yum is pulling it from the wrong repo (let's call it repoB). I have added my own repo (let's call it repoA) in the yum.repos.d folder. When I look inside yum.repos.d I do not see that repoB. But when I run sudo yum repolist I can see that repoB listed. My question is: where is this repoB located? How can I remove it? Thanks! | Repos are either defined via .repo files in /etc/yum.repos.d or via plugins, which are usually defined via files in /etc/yum/pluginconf.d If you run yum repolist --noplugins does your repo in question still show up? If you want to know the URLs of the packages from your mystery repo, you can use yumdownloader --urls packagename to see the URLs. yumdownloader is contained in the package yum-utils . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95644/"
]
} |
222,754 | I have a two text files: string.txt and lengths.txt String.txt: abcdefghijklmnopqrstuvwxyz lengths.txt 54107 I want to get the file >Entry_1abcde>Entry_2fghi>Entry_3jklmnopqrs>Entry_4tuvwxyz I'm working with about 28,000 entries and they vary between 200 and 56,000 characters. At the moment, I'm using: start=1end=0i=0while read read_ldo let i=i+1 let end=end+read_l echo -e ">Entry_$i" >>outfile.txt echo "$(cut -c$start-$end String.txt)" >>outfile.txt let start=start+read_l echo $idone <lengths.txt But it's very inefficient. Any better ideas? | You can do { while read l<&3; do { head -c"$l" echo } 3<&- done 3<lengths.txt} <String.txt It requires some explanation: The main idea is to use { head ; } <file and is derived from the underestimated @mikeserv answer . However in this case we need to use many head s, so while loop is introduced and a little bit of tweaking with file descriptors in order to pass to head input from both files (file String.txt as a main file to process and lines from length.txt as an argument to -c option). The idea is that benefit in speed should come from not needing to seek through the String.txt each time a command like head or cut is invoked. The echo is just to print newline after each iteration. How much it is faster (if any) and adding >Entry_i between lines is left as an exercise. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222754",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128136/"
]
} |
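The >Entry_i numbering that the answer leaves as an exercise only needs a counter; a sketch under the same assumptions (String.txt and lengths.txt as in the question):
{
  i=0
  while read l<&3; do
    i=$(( i + 1 ))
    printf '>Entry_%d\n' "$i"
    head -c"$l"    # still reads sequentially from String.txt on stdin
    echo
  done 3<lengths.txt
} <String.txt >outfile.txt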
222,772 | Is there a way to go from Kali light distro to full? Which packages do I need to install? | The sets of packages installed for the various flavours of Kali are defined in live-build-config . In this instance you need to look at the set of packages in Kali light and packages in Kali full : the latter adds kali-linux-full and kali-desktop-gnome . So to get all the utilities installed in Kali full: sudo apt-get install kali-linux-full If you want to install the GNOME 3 desktop used by default in Kali full: sudo apt-get install kali-desktop-gnome If you want to uninstall the XFCE desktop used by default in Kali light: sudo apt-get purge kali-desktop-xfce You can use the full set of utilities from the XFCE desktop, so you may want to try out the various possibilities before uninstalling anything. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/222772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61495/"
]
} |
222,793 | I am creating a script in CentOS 7 to move the latest file within a directory to another directory. The original directory that I'm copying from contains a valid file, however when I try to move or copy the file it errors out saying the file does not exist. I know the file does exist, as I prove below. Why does it fail and what can I do to fix it? If I run this line from my script in the shell, the $( expands the output into the variable as expected: NEW=$(ls -Art /home/user/directory/ | tail -1) I can prove this to myself by echoing the value of the variable like so: echo $NEW file.tar.gz Then I try to move the file to a different directory: mv $NEW /usr/local/directory/ ..and this is where I get the error. Note that the error message explicitly names the file it cannot find: mv: cannot stat ‘file.tar.gz’: No such file or directory The shell appears to be telling me that it can't find the file and then naming the file it can't find. I have tried replacing the backticks with parentheses but same result. I have tried changing the permissions of both the file and the directories above it to pretty much every permutation I can think of and also changed ownership to user.user I have tried running the command as both root and user, same result each time. I will appreciate any attempt to help resolve this. | It looks like you are not in the directory where the file is. You use ls -Art /home/user/directory/ , which returns only the filename part into NEW , not the directory part. Your move command should be mv "/home/user/directory/$NEW" /usr/local/directory/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106485/"
]
} |
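If parsing ls output feels fragile, a hedged alternative is to let find pick the newest regular file; this relies on GNU find's -printf (present on CentOS 7) and assumes file names without newlines, with paths as in the question:
NEW=$(find /home/user/directory/ -maxdepth 1 -type f -printf '%T@ %p\n' | sort -n | tail -n 1 | cut -d' ' -f2-)
mv "$NEW" /usr/local/directory/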
222,810 | Suppose I have a file abc.csv with below data: abcdefgehijklmnopqrst Now, I want to insert blank lines after line2 and line 6. Please suggest which command shall I use. | $ seq 10 | sed '2G;6G'12345678910 The G sed command appends a newline followed by the content of the hold space (here empty as we don't put anything in it) to the pattern space. So it's a quick way to add an empty line below that matched line. Other alternatives are the a and s command: sed '2a\6a\' Or: sed '2s/$/\/6s/$/\/' Some sed implementation also support: sed '2s/$/\n/;6s/$/\n/' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128325/"
]
} |
222,889 | I'm having a lot of trouble at work trying to save a long list of echo outputs as a .txt file on my desktop. I am using Bash in Yosemite 10.10.4. I am still very new to Bash so any help and tips are appreciated. The goal is to print the name of the protocol used per brain scan, for a long list of brain scans. I used a for loop to recursively go through each brain scan, pull out the protocol used, then echo it and the path to the exact file used to acquire that information. My script: for i in /Path/to/scans/dofor file in "$i/"001*/0001.dcmdo# If there is no such file here, just skip this scan.if [ ! -f "$file" ]thenecho "Skipping $i, no 0001.dcm file here" >&2continuefi# Otherwise take the protocol data from scan outline= dcmdump +P 0040,0254 0001.dcm ## dcmdump is the command line tool needed to pull out this data. ## In my case I am saving to variable "line" the protocol used in ## the scan that this single 0001.dcm file belongs to (scans require ## many .dcm files but each one contains this kind of meta-data).# Print the resultecho "$line $file"breakdonedone So this script almost works. In my Terminal window, I do get a long list of protocols used, and the absolute filepath to the 0001.dcm file used for each scan. My problem is, when I change it to echo "$line $file" >> /Users/me/Desktop/scanparametersoutput.txt The text file that appears on my desktop is blank. Anyone have any idea about what I am doing wrong? | One problem you're having with your script is in this line: line= dcmdump +P 0040,0254 0001.dcm Instead of assigning the output of dcmdump to line , it is running your dcmdump command with an environment variable called line set to '' . You can read more about this here . So what you're actually seeing is the output of dcmdump being run by your script, not the output of $line , since $line isn't being assigned anything. To capture the output of a program, use the syntax line=$(dcmdump +P 0040,0254 0001.dcm) (Note also that there is no space before or after the = sign to be safe.) $() runs the code within the parentheses in a subshell and then 'replaces' itself with that the output of that code. You probably want 0001.dcm within the dcmdump command to be $file instead as well, but I'm not familiar with it, so I'll leave that to you. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119846/"
]
} |
222,944 | I try to sshfs mount a remote dir, but the mounted files are not writable. I have run out of ideas or ways to debug this. Is there anything I should check on the remote server? I am on an Xubuntu 14.04. I mount remote dir of a 14.04 Ubuntu. local $ lsb_release -aNo LSB modules are available.Distributor ID: UbuntuDescription: Ubuntu 14.04.3 LTSRelease: 14.04Codename: trusty I changed the /etc/fuse.conf local $ sudo cat /etc/fuse.conf# /etc/fuse.conf - Configuration file for Filesystem in Userspace (FUSE)# Set the maximum number of FUSE mounts allowed to non-root users.# The default is 1000.#mount_max = 1000# Allow non-root users to specify the allow_other or allow_root mount options.user_allow_other And my user is in the fuse group local $ sudo grep fuse /etc/groupfuse:x:105:MY_LOACL_USERNAME And I mount the remote dir with (tried with/without combinations of sudo, default_permissions, allow_other): local $sudo sshfs -o allow_other,default_permissions -o IdentityFile=/path/to/ssh_key REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ /mnt/LOCAL_DIR_NAME/ The REMOTE_USERNAME has write permissions to the dir/files (on the remote server). I tried the above command without sudo, default_permissions, and in all cases I get: local $ ls -al /mnt/LOCAL_DIR_NAME/a_file-rw-rw-r-- 1 699 699 1513 Aug 12 16:08 /mnt/LOCAL_DIR_NAME/a_filelocal $ test -w /mnt/LOCAL_DIR_NAME/a_file && echo "Writable" || echo "Not Writable"Not Writable Clarification 0 In response to user3188445's comment: $ whoamiLOCAL_USER$ cd$ mkdir test_mnt$ sshfs -o allow_other,default_permissions -o IdentityFile=/path/to/ssh_key REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ test_mnt/$ ls test_mnt/I see the contents of the dir correctly$ ls -al test_mnt/total 216drwxr-xr-x 1 699 699 4096 Aug 12 16:42 .drwxr----- 58 LOCAL_USER LOCAL_USER 4096 Aug 17 15:46 ..-rw-r--r-- 1 699 699 2557 Jul 30 16:48 sample_filedrwxr-xr-x 1 699 699 4096 Aug 11 17:25 sample_dir$ touch test_mnt/new_file touch: cannot touch ‘test_mnt/new_file’: Permission denied# extra info: SSH to the remote host and check file permissions$ ssh REMOTE_USERNAME@REMOTE_HOST# on remote host$ ls -al /remote/dir/path/lrwxrwxrwx 1 root root 18 Jul 30 13:48 /remote/dir/path/ -> /srv/path/path/path/$ cd /remote/dir/path/$ ls -altotal 216drwxr-xr-x 26 REMOTE_USERNAME REMOTE_USERNAME 4096 Aug 12 13:42 .drwxr-xr-x 4 root root 4096 Jul 30 14:37 ..-rw-r--r-- 1 REMOTE_USERNAME REMOTE_USERNAME 2557 Jul 30 13:48 sample_filedrwxr-xr-x 2 REMOTE_USERNAME REMOTE_USERNAME 4096 Aug 11 14:25 sample_dir | The question was answered in a linux mailing list ; I post a translated answer here for completeness. Solution The solution is to not use both of the options default_permissions and allow_other when mounting (which I didn't try in my original experiments). Explanation The problem seems to be quite simple. When you use the option default_permissions in fusermount then fuse's permission control of the fuse mount is handled by the kernel and not by fuse . This means that the REMOTE_USER's uid/gid aren't mapped to the LOCAL_USER (sshfs.c IDMAP_NONE). It works the same way as a simple nfs fs without mapping. So, it makes sense to prohibit the access, if the uid/gid numbers don't match. If you have the option allow_other then this dir is writable only by the local user with uid 699, if it exists. From fuse's man: 'default_permissions' By default FUSE doesn't check file access permissions, the filesystem is free to implement its access policy or leave it to the underlying file access mechanism (e.g. 
in case of network filesystems). This option enables permission checking, restricting access based on file mode. It is usually useful together with the 'allow_other' mount option.'allow_other' This option overrides the security measure restricting file access to the user mounting the filesystem. This option is by default only allowed to root, but this restriction can be removed with a (userspace) configuration option. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128564/"
]
} |
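A minimal sketch of the fix described in the sshfs answer above, reusing the placeholder key path, usernames and mount points from the question (this is an illustration, not the author's exact command): mount without default_permissions so FUSE handles the permission mapping itself:
sshfs -o IdentityFile=/path/to/ssh_key REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ /mnt/LOCAL_DIR_NAME/
test -w /mnt/LOCAL_DIR_NAME/a_file && echo "Writable" || echo "Not Writable"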
222,974 | When I want to ask for a password in a bash script, I do that : read -s ...but when I run bash in POSIX mode, with sh , the -s option is rejected: $ read -ssh: 1: read: Illegal option -s How do I securely ask for an input with a POSIX-compliant command ? | read_password() { REPLY="$( # always read from the tty even when redirected: exec < /dev/tty || exit # || exit only needed for bash # save current tty settings: tty_settings=$(stty -g) || exit # schedule restore of the settings on exit of that subshell # or on receiving SIGINT or SIGTERM: trap 'stty "$tty_settings"' EXIT INT TERM # disable terminal local echo stty -echo || exit # prompt on tty printf "Password: " > /dev/tty # read password as one line, record exit status IFS= read -r password; ret=$? # display a newline to visually acknowledge the entered password echo > /dev/tty # return the password for $REPLY printf '%s\n' "$password" exit "$ret" )"} Note that for those shells (ksh88, mksh and most other pdksh-derived shells) where printf is not builtin, the password would appear in clear in the ps output (for a few microseconds) or may show up in some audit logs if all command invocations with their parameters are audited. In those shells however, you can replace it with print -r -- "$password" . In any case echo is generally not an option . Another POSIX-compliant one that doesn't involve revealing the password in the ps output (but might end up having it written onto permanent storage) is: cat << EOF$passwordEOF Also note that zsh's IFS= read -rs 'pass?Password: ' or bash's IFS= read -rsp 'Password: ' pass issue the Password: prompt on stderr. So with those, you might want to add a 2> /dev/tty to make sure the prompt goes to the controlling terminal. In any case, make sure you don't forget the IFS= and -r . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/222974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128457/"
]
} |
222,999 | This answer to the question Non-Root Package Managers suggests Nix as a solution. However, the installation documentation says: The script will invoke sudo to create /nix if it doesn’t already exist. If you don’t have sudo, you should manually create /nix first as root. I don't have permissions to do either on a target machine. Does that mean that there is no way for me to install and therefore use Nix unless the sysadmin agrees to install it? Does the same apply to Guix ? | You can try installing Nix using PRoot. Or you can build it for a custom prefix: NIX_STORE_DIR=/opt/custom/store \ NIX_STATE_DIR=/opt/custom/var/nix \ NIX_DB_DIR=/opt/custom/var/nix/db \ nix-build ... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/222999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
223,037 | My script (status.sh) is: #!/bin/bash
SITE=http://www.example.org/
STATUS=$(/usr/bin/curl -s -o /dev/null -I -w "%{http_code}" $SITE)
if [ $STATUS -eq 200 ]
then echo $STATUS >> /home/myuser/mysite-up.log
else echo $STATUS >> /home/myuser/mysite-down.log
fi
I run: $ chmod +x /home/myuser/status.sh Then in my crontab I have: * * * * * /home/myuser/status.sh When I run: $ /home/myuser/status.sh the file /home/myuser/mysite-up.log contains: 200 But when cron runs it, the file /home/myuser/mysite-up.log contains: 000 What am I doing wrong? EDIT: I modified the script, adding set -x as @Sobrique suggested, and the output is: SITE=http://www.example.org/ /usr/bin/curl -s -o /dev/null -I -w '%{http_code}' http://www.example.org/ STATUS=000 '[' 000 -eq 200 ']' echo 000 | The main difference between running a script on the command line and running it from cron is the environment. If you get different behavior, check if that behavior might be due to environment variables. Cron jobs run with only a few variables set, and those not necessarily to the same value as in a logged-in session (in particular, PATH is often different). If you want your environment variables to be set, either declare them in ~/.pam_environment (if your system supports it) or add . ~/.profile && at the beginning of the cron job (if you declare them in .profile). See also What's the best distro/shell-agnostic way to set environment variables? In this case, a 000 status from curl indicates that it could not connect to the server. Usually, the network connection is system-wide, so networking behaves the same in cron. However, one thing that is indicated by environment variables is any proxy use. If you need a proxy to connect to the web and you've set the environment variable http_proxy in a session startup script, that setting isn't applied in your cron job, which would explain the failure. Add the option -S to your curl invocation to display error messages (while retaining -s to hide other messages). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81296/"
]
} |
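A sketch of the two fixes suggested above, keeping the question's placeholder paths: source the profile in the crontab entry so the environment matches an interactive session, and let curl report failures while staying otherwise quiet:
* * * * * . "$HOME/.profile"; /home/myuser/status.sh
# inside the script, -S makes curl print errors even with -s:
STATUS=$(/usr/bin/curl -sS -o /dev/null -I -w "%{http_code}" "$SITE" 2>>/home/myuser/curl-error.log)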
223,058 | I have a big report with many IP addresses shown on random lines. All the IP addresses start with 192.168. I would like to extract only the IP addresses and get a report that looks like: 192.anything.anything.anything 192.xx.xx.xx 192.xx.xx.xx And nothing else. I tried cat filename | grep -w 192 but that returns the whole line. I only want the full IP address. I appreciate any information you can share with me. | I do this with egrep -o or grep -E -o . The -E flag in grep activates extended regex (which is what egrep uses by default), and the -o flag prints only the matching string. grep -E -o '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' /path/to/log
192.168.1.11 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223058",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126111/"
]
} |
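Since the question says every address of interest starts with 192.168, a tighter pattern (illustrative, not from the original answer) avoids picking up other dotted quads such as version strings, and sort -u removes duplicates:
grep -E -o '192\.168\.[0-9]{1,3}\.[0-9]{1,3}' filename | sort -u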
223,060 | I want to print the countries from the lines that have to after the first ~: separator. welcome~:to~:Germany
welcome~:no please~:Italy
welcome~:to~:Brazil
welcome~:not ok~:China
Note: I do not know what will be there other than " to ". It can be anything and can change (like no please , not ok ). I tried using cut, awk and sed, but I am unable to figure it out. awk -F "~:" '{print $2 $NF}' But I get output like: toGermany
no pleaseItaly
toBrazil
not okChina
How do I print only the countries from lines where the second field is to ? UPDATE: Alternative solution:(Figured out with cut) :-) grep -v "to" |cut -d ':' -f3 | As you are using ~: as the field separator, you can check whether the second field is equal to to and print the third field, which is the country: awk -F"~:" '$2 == "to" { print $3; }' file Result: Germany
Brazil | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126218/"
]
} |
223,086 | For example: #!/bin/bash
INT=-5
if [[ "$INT" =~ ^-?[0-9]+$ ]]; then
echo "INT is an integer."
else
echo "INT is not an integer." >&2
exit 1
fi
When I delete >&2 , nothing seems different. Why do I need to add >&2 ? | The difference is: echo "INT is an integer." writes to standard-out, and echo "INT is not an integer." >&2 writes to standard-error. In Unix-world, stdout is generally used when everything is working correctly and stderr is generally used to print messages when something goes wrong. By default, stdout and stderr both print to your screen. The main difference is that the > and | operators catch stdout by default, but not stderr. So if you had your script in the middle of a pipeline, INT is an integer. would continue down the pipeline and INT is not an integer. would print to your screen instead of going into the pipeline. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128532/"
]
} |
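A quick demonstration of the difference, assuming the script above is saved as check.sh (a hypothetical name): redirect one stream at a time and see which message survives:
./check.sh > /dev/null     # stdout discarded; the error line still reaches the screen
./check.sh 2> /dev/null    # stderr discarded; only the success line is shown
./check.sh > out.log 2> err.log   # capture the two streams in separate files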
223,151 | I started to add some basic iptables rules on my Debian Jessie server. My objective is to filter and log network traffic (for security and learning purposes). Disregarding ICMP packets, these are the rules I'm using: # INPUT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -j REJECT --reject-with tcp-reset
# OUTPUT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p udp -m udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 25 -j ACCEPT
-A OUTPUT -m limit -j LOG --log-prefix "UNKNOWN_OUTGOING: " --log-level 5
The policy is set to ACCEPT for both INPUT and OUTPUT. Now the log frequently lists outgoing RST packets, usually to port 80. The SRC IP here belongs to my server; the destination IP is partially edited out so as not to disclose other people's activities. Aug 14 11:48:37 reynholm kernel: [81795.100496] UNKNOWN_OUTGOING: IN= OUT=ifext SRC=89.238.65.123 DST=108.162.[edited] LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=3594 DPT=80 WINDOW=0 RES=0x00 RST URGP=0 I don't understand what's causing this; there are no applications running other than SSH and an MTA. Is it because of my input REJECT rule? But shouldn't those packets then be handled by the output state rule? Below is a capture of one of those packets together with the connection attempt apparently triggering it. No packet was sent between my server and 108.162.[edited] before this. 11:48:37.860337 IP (tos 0x0, ttl 60, id 0, offset 0, flags [DF], proto TCP (6), length 44) 108.162.[edited].80 > 89.238.65.123.3594: Flags [S.], cksum 0x79bb (correct), seq 79911989, ack 235561828, win 29200, options [mss 1460], length 0 0x0000: 4500 002c 0000 4000 3c06 7342 6ca2 0000 E..,..@.<.sBl... 0x0010: 59ee 417b 0050 0e0a 04c3 5c35 0e0a 6364 Y.A{.P....\5..cd 0x0020: 6012 7210 79bb 0000 0204 05b4 0000 `.r.y.........
11:48:37.860408 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 89.238.65.123.3594 > 108.162.[edited].80: Flags [R], cksum 0x648e (correct), seq 235561828, win 0, length 0 0x0000: 4500 0028 0000 4000 4006 6f46 59ee 417b E..(..@.@.oFY.A{ 0x0010: 6ca2 0000 0e0a 0050 0e0a 6364 0000 0000 l......P..cd.... 0x0020: 5004 0000 648e 0000 P...d... | The creation of the TCP RST packet is from your rule -A INPUT -p tcp -j REJECT --reject-with tcp-reset The default policy (ACCEPT in your case) only applies to packets that do not match any of the rules in your chain. If a packet matches the rule above with the REJECT target, it will not be subject to the default policy and will be REJECTed (and generate a TCP RST) rather than ACCEPTed. This TCP RST will not match your rule: -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT because it is not RELATED to another established connection and it is not part of an ESTABLISHED connection. It will continue through your rules and match -A OUTPUT -m limit -j LOG --log-prefix "UNKNOWN_OUTGOING: " --log-level 5 and end up in your log. If you do not want to log these RST packets, either adjust this rule to not match them or insert an earlier rule to match the RST packets and do something with them before they get here. Something else I'm noticing is that the first packet you are logging is a SYN/ACK packet from a remote webserver, which looks like a response packet from the remote webserver to a SYN packet you would have earlier sent to begin the connection to the remote host on port 80.
If you didn't send an initial SYN, I don't think the connection would match 'ESTABLISHED', but if you did send a SYN then I think the connection should match 'ESTABLISHED'. This could be messing with which rule your RST ends up matching. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223151",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128583/"
]
} |
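If you would rather not log those locally generated resets at all, one illustrative option (an assumption, not part of the original answer) is to accept packets with the RST flag set before the catch-all LOG rule:
-A OUTPUT -p tcp --tcp-flags RST RST -j ACCEPT
Placed above the LOG rule, this matches any outgoing TCP packet with RST set, so the REJECT-generated resets never reach the logger.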
223,182 | I tried the following shell script, which should replace the spaces in all xml filenames with underscores: for xml_file in $(find $1 -name "* .xml" -type f);do echo "removing spaces from XML file:" $xml_file mv "$xml_file" "${xml_file// /_}";done Suppose I have an xml file with the name xy z.xml ; then it gives: removing spaces from XML file: /home/krishna/test/xy
mv: cannot stat `/home/krishna/test/xy': No such file or directory
removing spaces from XML file: .xml
mv: cannot stat `z.xml': No such file or directory | Use this with bash: find $1 -name "* *.xml" -type f -print0 | \ while read -d $'\0' f; do mv -v "$f" "${f// /_}"; done find will search for files with a space in the name. The filenames will be printed with a nullbyte ( -print0 ) as delimiter to also cope with special filenames. Then the read builtin reads the filenames delimited by the nullbyte, and finally mv replaces the spaces with an underscore. EDIT: If you want to remove the spaces in the directories too, it's a bit more complicated. The directories are renamed and are then no longer accessible under the name find found. Try this: find -name "* *" -print0 | sort -rz | \ while read -d $'\0' f; do mv -v "$f" "$(dirname "$f")/$(basename "${f// /_}")"; done The sort -rz reverses the file order, so that the deepest files in a folder are the first to move and the folder itself will be the last one. So, no folder is renamed before all the files and folders inside it have been renamed. The mv command in the loop is a bit changed too. In the target name, we only remove the spaces in the basename of the file, else it wouldn't be accessible. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/223182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99822/"
]
} |
223,211 | I tried the following shell script to replace all spaces with underscores: find $1 -depth -name "* *" -print0 | \ while read -d $'\0' f; do mv -v "$f" "${f// /_}"; done If I have a directory /home/user/g h/y h/u j/ it will rename the y h directory to y_h and then give an error for /home/user/g h/y h/u j : No such file or directory | Use this: find -name "* *" -print0 | sort -rz | \ while read -d $'\0' f; do mv -v "$f" "$(dirname "$f")/$(basename "${f// /_}")"; done find will search for files and folders with a space in the name. These will be printed ( -print0 ) with nullbytes as delimiters to cope with special filenames too. The sort -rz reverses the file order, so that the deepest files in a folder are the first to move and the folder itself will be the last one. So, no folder is renamed before all the files and folders inside it have been renamed. Finally, the mv command renames the file/folder. In the target name, we only remove the spaces in the file's basename, else it wouldn't be accessible anymore. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99822/"
]
} |
223,275 | I have a large text file (~500K lines) with short sentences (couple of words long). Additionally, there is some XML markup in most of the lines. Finally, the text file has been sorted before the markup has been added! Adding the XML markup changes the alphabetic sort but this is desired. My question is: How can I print random lines respecting the order of the source file? I know I could just use the shuf command and sort the result. The problem is that the markup will mess up the sort. I could also write a python script which loads the text file in a list, generates some random numbers, sorts them and uses them as indices to pull out the lines. If possible, I would prefer standard *nix command-line tools. Sample data: <CITY>anaconda</CITY> city is in <STATE>montana</STATE>let's go to <CITY>rome</CITY>please find <CITY>berlin</CITY>where is <CITY>cairo</CITY> in <COUNTRY>egypt</COUNTRY> For example, it would be great if I could pull out the line 2 and 3. Lines 1,3 and 4 are also good. If I get the line 3, 1 and 4, this is not good. | Use this: nl file | shuf -n2 | sort -n | cut -f2- nl to number the lines, shuf to shuffle and limit the output to 2 lines ( -n ), sort to rebuild the original order, and cut to remove the numeration of nl . It will print 2 lines of your file in the original order of the file. Use shuf -n X , where X can be any number. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/223275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104493/"
]
} |
223,276 | So ssh has the option HostKeyAlgorithms . Sample usage: ssh -o "HostKeyAlgorithms ssh-rsa" user@hostname I'm trying to get the client to connect using the server's ecdsa key, but I can't find what the correct string is for that. What command can I use to get a list of the available HostKeyAlgorithms? | ssh -Q key Unless you have an ancient version of OpenSSH, in which case you'll have to dive into the source, or run ssh -v -v -v ... and see if what you want appears there. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/223276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125142/"
]
} |
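For the ecdsa case in the question, the output of ssh -Q key should include names such as ecdsa-sha2-nistp256; a usage sketch with placeholder user and host:
ssh -o HostKeyAlgorithms=ecdsa-sha2-nistp256 user@hostname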
223,300 | System log files are serialized and I use ls -lrt to show me the most recent file. I then cat that file. This requires typing a long serial number each time. How can I cat the last file appearing in my ls -lrt output in one command? I'm using cygwin and the output from ls -lrt foobar_job* looks like this: -rw-r--r-- 1 zundarz Domain Users 1133 Jul 31 16:54 foobar_job4855125.log
-rw-r--r-- 1 zundarz Domain Users 1256 Jul 31 17:10 foobar_job4855127.log
-rw-r--r-- 1 zundarz Domain Users 1389 Aug 11 10:20 foobar_job4887829.log
-rw-r--r-- 1 zundarz Domain Users 1228 Aug 11 10:39 foobar_job4887834.log | If you're just going to cat the newest file in one command, you don't really need the -l option. On Linux and Cygwin you can use the -1 option and make parsing much easier: $ cat "$(ls -1rt | tail -n1)" -1 should be very portable; it's specified in POSIX. Also keep in mind that parsing ls output has its drawbacks. EDIT: As correctly noted in a comment by don_crissti, you don't even need -1 : $ cat "$(ls -rt | tail -n1)" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/223300",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128687/"
]
} |
223,310 | I was reading that the ext4 filesystem supports compression, encryption and a feature called extents, which is used to save disk space. What are extents and how are they effective for saving disk space? | Extents reduce the amount of metadata needed to keep track of the data blocks for large files. Instead of storing a list of every individual block which makes up the file, the idea is to store just the address of the first and last block of each continuous range of blocks. These continuous ranges of data blocks (and the pairs of numbers which represent them) are called extents . The addresses of a file's first few data blocks are stored in the inode, but since the inode has a fixed size, this only works for small files. In ext2 or ext3, large files require the use of indirect blocks to store the rest of the list of block addresses which won't fit in the inode itself. That is, the inode contains the address of a block which itself contains a list of blocks. These are called indirect blocks . These extra blocks are not usually needed when using extents, because storing an extent takes a constant amount of space regardless of how big a range of blocks it describes. A very fragmented file might still need extra metadata blocks (which ext4 calls extent nodes ) to store a long list of extents, but typically still much fewer than would be needed otherwise. The reduction in metadata size is usually quite small in proportion to the size of the file, though. The main motivation for extents is about improving performance (by reducing fragmentation and having fewer metadata blocks to read and write) rather than saving space per se. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223310",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128366/"
]
} |
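To see extents in practice, filefrag from e2fsprogs prints the extent list of a file on ext4 (the path below is a placeholder):
filefrag -v /path/to/somefile
Each output row is one extent, with its logical offset, physical block range and length, so an unfragmented multi-gigabyte file can show just a single row.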
223,351 | How can I see the raw memory data used by an application? Say I have a file named something.sh . I run the command ./something.sh , and then I want to see all the data it's accessing in RAM, all the files it's accessing in my filesystem, and the network data or connections it's using. Maybe the hex dump of the memory used by this application. Can I do that in Ubuntu? | How can I see the raw memory data used by an application... Once you have obtained the process' PID (using ps(1) or pidof(8) for instance), you may access the data in its virtual address space using /proc/PID/maps and /proc/PID/mem . Gilles wrote a very detailed answer about that here . ... and all the files it's accessing in my filesystem, network data or connections lsof can do just that. netstat may be more appropriate for network-related descriptors. For instance: $ netstat -tln # TCP connections, listening, don't resolve names.
$ netstat -uln # UDP endpoints, listening, don't resolve names.
$ netstat -tuan # TCP and UDP, all sorts, don't resolve names.
$ lsof -p PID # "Files" opened by process PID.
Note: netstat 's -p switch will allow you to print the process associated with each line (at least, your processes). To select a specific process, you can simply use grep : $ netstat -tlnp | grep skype # TCP, listening, don't resolve (Skype). For more information about these tools: netstat(8) and lsof(8) . See also: proc(5) (and the tools mentioned in other answers). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42741/"
]
} |
223,385 | On 32-bit Linux systems, invoking this $ /lib/libc.so.6 and on 64-bit systems this $ /lib/x86_64-linux-gnu/libc.so.6 in a shell, provides an output like this: GNU C Library stable release version 2.10.1, by Roland McGrath et al.Copyright (C) 2009 Free Software Foundation, Inc.This is free software; see the source for copying conditions.There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR APARTICULAR PURPOSE.Compiled by GNU CC version 4.4.0 20090506 (Red Hat 4.4.0-4).Compiled on a Linux >>2.6.18-128.4.1.el5<< system on 2009-08-19.Available extensions: The C stubs add-on version 2.1.2. crypt add-on version 2.1 by Michael Glad and others GNU Libidn by Simon Josefsson Native POSIX Threads Library by Ulrich Drepper et al BIND-8.2.3-T5B RT using linux kernel aioFor bug reporting instructions, please see:<http://www.gnu.org/software/libc/bugs.html>. Why and how does this happen, and how is it possible to do the same in other shared libraries? I looked at /usr/lib to find executables, and I found /usr/lib/libvlc.so.5.5.0 . Running it led to a segmentation fault . :-/ | That library has a main() function or equivalent entry point, and was compiled in such a way that it is useful both as an executable and as a shared object. Here's one suggestion about how to do this, although it does not work for me. Here's another in an answer to a similar question on S.O , which I'll shamelessly plagiarize, tweak, and add a bit of explanation. First, source for our example library, test.c : #include <stdio.h> void sayHello (char *tag) { printf("%s: Hello!\n", tag); } int main (int argc, char *argv[]) { sayHello(argv[0]); return 0; } Compile that: gcc -fPIC -pie -o libtest.so test.c -Wl,-E Here, we are compiling a shared library ( -fPIC ), but telling the linker that it's a regular executable ( -pie ), and to make its symbol table exportable ( -Wl,-E ), such that it can be usefully linked against. And, although file will say it's a shared object, it does work as an executable: > ./libtest.so ./libtest.so: Hello! Now we need to see if it can really be dynamically linked. An example program, program.c : #include <stdio.h>extern void sayHello (char*);int main (int argc, char *argv[]) { puts("Test program."); sayHello(argv[0]); return 0;} Using extern saves us having to create a header. Now compile that: gcc program.c -L. -ltest Before we can execute it, we need to add the path of libtest.so for the dynamic loader: export LD_LIBRARY_PATH=./ Now: > ./a.outTest program../a.out: Hello! And ldd a.out will show the linkage to libtest.so . Note that I doubt this is how glibc is actually compiled, since it is probably not as portable as glibc itself (see man gcc with regard to the -fPIC and -pie switches), but it demonstrates the basic mechanism. For the real details you'd have to look at the source makefile. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/223385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37799/"
]
} |
223,408 | I was slightly confused by: % vim tmp
zsh: suspended vim tmp
% kill %1
% jobs
[1] + suspended vim tmp
% kill -SIGINT %1
% jobs
[1] + suspended vim tmp
% kill -INT %1
% jobs
[1] + suspended vim tmp
So I resigned to just "do it myself" and wondered why later: % fg
[1] - continued vim tmp
Vim: Caught deadly signal TERM
Vim: Finished.
zsh: terminated vim tmp
% Oh! It makes sense really, now that I think about it, that vim has to be running in order for its signal handler to be told to quit, and to do so. But obviously that's not what I intended. Is there a way to "wake and quit" in a single command? i.e., a built-in alias for kill %N && fg %N ? Why does resuming in the background not work? If I bg instead of fg , Vim stays alive until I fg , which sort of breaks my above intuition. | vi-vi-vi is of the devil. You must kill it with fire. Or SIGKILL : kill -KILL %1 The builtin kill s are kind enough to send SIGCONT to suspended processes so that you don't have to do it yourself, but that won't help if the process blocks the signal you're sending or if handling the signal causes the process to become suspended again (if a background process tries to read from the terminal, by default, it'll be sent SIGTTIN , which suspends the process if unhandled). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/223408",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62835/"
]
} |
223,432 | My method for disowning the foreground process takes too much effort. Suppose I have a process in zsh 's foreground. I want to disown it, so I can close the shell without the process being sent a SIGHUP . At the moment, I start with Ctrl + z to background and pause the process, then $ disowndisown: warning: job is suspended, use `kill -CONT -32240' to resume$ kill -CONT -32240$ — then I can close the terminal. How can I automate that? Ideally, I'd like to be able to press Ctrl + j or something to immediately disown the running process. Or second best, I'd want to be able to run a single command to both disown and SIGCONT the process once it's suspended. | Having a global keybind to disown the foreground process is impossible: Keystrokes are received by the foreground process, not by the shell. You need to first suspend it with Ctrl + z if you want to disown it. However, turns out there's a zsh option to speed up disowning then continuing : With setopt AUTO_CONTINUE , disown will automatically also send SIGCONT . So you can get it down to C-z disown . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223432",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16404/"
]
} |
223,469 | Background I own a domain with a catch-all, so that all email sent to *@foo.bar goes to one account. I have unique usernames for particular organisations, e.g. [email protected] . If an email address is compromised and I start receiving spam, I can delete the account, creating a new one at [email protected] . (This works very well; I've deleted about 30 email addresses in 7 years, and receive zero spam.) Mutt functionality I'm thinking about moving from Thunderbird to Mutt as my email client. However, one Thunderbird add-on that I use extensively is Virtual Identity . This allows me to manually type in the sender address , and can also automatically modify this address in two ways. It saves a database of previous recipients linked with the previously-used sender address. Next time I send an email to a particular recipient, it will automatically fill the sender field with the previously-used address . If the recipient is new, and I reply to an email, then it will automatically fill the sender field with the address the original email was sent to . Is there a way for Mutt to do these three functions (in bold above)? I understand that the final point is somewhat possible , although that solution requires setting up a list of potential sender addresses, rather than automatically allowing all senders in *@foo.bar . | You can configure mutt to use different from addresses (via your ~/.muttrc ), e.g.: set use_from = yesset envelope_from = yesset from = [email protected] realname = "Default Realname"# list of all your addressesalternates @example\.org$ You can setup some macros to explicitly switch the from before composing a new mail: macro index \e1 "set [email protected]\n" "Select foo address"macro index \e2 "set [email protected]\n" "Select bar address"# ... When replying to an email, you can configure mutt to automatically use the to-header as from address (this is point 2 from your question): set reverse_name=yes Don't reuse the real name - helps when people send you crap like "[email protected]" <[email protected]> : set reverse_realname=no Then you can set up some hooks to make things depend on header values - e.g. to use different fcc folders: fcc-hook '~f ^foo@example\.org' '=foo' (There are also other hooks, like send-hook etc.) I would look into the hooks to implement something like point 1 from your question. Although, you would need some external scripting to maintain such a database. Depending on your current MTA setup you may have to change its config as well, i.e. such that it accepts different envelope froms. It is also possible to use different SMTP relays depending on e.g. the hostname of the envelope from - but this must be configured in the MTA. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223469",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
223,503 | In my bash script I'm trying to print a line if a certain string does not exist in a file. if grep -q "$user2" /etc/passwd; then echo "User does exist!!" This is how I wrote it if I wanted the string to exist in the file but how can I change this to make it print "user does not exist" if the user is not found in the /etc/passwd file? | grep will return success if it finds at least one instance of the pattern and failure if it does not. So you could either add an else clause if you want both "does" and "does not" prints, or you could just negate the if condition to only get failures. An example of each: if grep -q "$user2" /etc/passwd; then echo "User does exist!!"else echo "User does not exist!!"fiif ! grep -q "$user2" /etc/passwd; then echo "User does not exist!!"fi | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/223503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128833/"
]
} |
223,513 | I tried the method below to install XFCE and was expecting the XFCE desktop to come up on the next reboot: apt-get install kali-defaults kali-root-login desktop-base xfce4 xfce4-places-plugin xfce4-goodies The XFCE window manager is installed; however, I see no changes to the window interface. How do I resolve this on Kali 2.0 (sana)? | Use the command update-alternatives --config x-session-manager . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128839/"
]
} |
223,533 | I'm working under Raspbian (for RaspberryPi): Linux version 3.18.14-v7+ (root@vagrant-ubuntu-trusty-32) (gcc version 4.8.3 20140106 (prerelease) (crosstool-NG linaro-1.13.1-4.8-2014.01 - Linaro GCC 2013.11) ) nb : I'm connecting to the pi from my laptop using a ssh session. When trying to solve this problem here: How to fix perl : warning : setting local failed I run sudo dpkg-reconfigure localesperl: warning: Setting locale failed.perl: warning: Please check that your locale settings: LANGUAGE = "en_US.UTF-8", LC_ALL = "en_US.UTF-8", LC_PAPER = "fr_FR.UTF-8", LC_ADDRESS = "fr_FR.UTF-8", LC_MONETARY = "fr_FR.UTF-8", LC_NUMERIC = "fr_FR.UTF-8", LC_TELEPHONE = "fr_FR.UTF-8", LC_IDENTIFICATION = "fr_FR.UTF-8", LC_MEASUREMENT = "fr_FR.UTF-8", LC_TIME = "fr_FR.UTF-8", LC_NAME = "fr_FR.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system.perl: warning: Falling back to the standard locale ("C").locale: Cannot set LC_CTYPE to default locale: No such file or directorylocale: Cannot set LC_MESSAGES to default locale: No such file or directorylocale: Cannot set LC_ALL to default locale: No such file or directorydpkg-query: package 'locales' is not installed and no information is availableUse dpkg --info (= dpkg-deb --info) to examine archive files,and dpkg --contents (= dpkg-deb --contents) to list their contents./usr/sbin/dpkg-reconfigure: locales is not installed locales is not installed so I run this command to install locales sudo apt-get install localesReading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: apache2-mpm-prefork : Depends: apache2.2-bin (= 2.2.22-13+deb7u5) but it is not going to be installed apache2.2-common : Depends: apache2.2-bin (= 2.2.22-13+deb7u5) but it is not going to be installed Depends: apache2-utils but it is not going to be installed Depends: procps but it is not going to be installed Depends: perl but it is not going to be installed Recommends: ssl-cert but it is not going to be installed libbz2-1.0 : PreDepends: multiarch-support but it is not going to be installed libc6 : Breaks: locales (< 2.19) libcomerr2 : PreDepends: multiarch-support but it is not going to be installed libdb5.1 : PreDepends: multiarch-support but it is not going to be installed libgcc1 : PreDepends: multiarch-support but it is not going to be installed libgssapi-krb5-2 : Depends: libkeyutils1 (>= 1.4) but it is not going to be installed Depends: libkrb5support0 (>= 1.12~alpha1+dfsg) but it is not going to be installed PreDepends: multiarch-support but it is not going to be installed libk5crypto3 : Depends: libkeyutils1 (>= 1.4) but it is not going to be installed Depends: libkrb5support0 (>= 1.12~alpha1+dfsg) but it is not going to be installed PreDepends: multiarch-support but it is not going to be installed libkrb5-3 : Depends: libkeyutils1 (>= 1.5.9) but it is not going to be installed Depends: libkrb5support0 (= 1.12.1+dfsg-19) but it is not going to be installed PreDepends: multiarch-support but it is not going to be installed libmagic1 : PreDepends: multiarch-support but it is not going to be installed libpcre3 : PreDepends: multiarch-support but it is not going to be installed libssl1.0.0 : 
Depends: debconf (>= 0.5) but it is not going to be installed or debconf-2.0 PreDepends: multiarch-support but it is not going to be installed libxml2 : Depends: liblzma5 (>= 5.1.1alpha+20120614) but it is not going to be installed PreDepends: multiarch-support but it is not going to be installed Recommends: xml-core but it is not going to be installed locales : Depends: glibc-2.13-1 Depends: debconf (>= 0.5) but it is not going to be installed or debconf-2.0 php5-common : Depends: sed (>= 4.1.1-1) but it is not going to be installed Depends: psmisc (>= 22.15-1~) but it is not going to be installed Depends: lsof but it is not going to be installed PreDepends: dpkg (>= 1.16.1~) but it is not going to be installed tzdata : Depends: debconf (>= 0.5) but it is not going to be installed or debconf-2.0 ucf : Depends: debconf (>= 1.5.19) but it is not going to be installed Depends: coreutils (>= 5.91) but it is not going to be installed zlib1g : PreDepends: multiarch-support but it is not going to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. Any hints? | This is the complete solution to my problem. Install locales: sudo vim /etc/apt/sources.list
deb http://mirrordirector.raspbian.org/raspbian/ wheezy main contrib non-free rpi
# Uncomment line below then 'apt-get update' to enable 'apt-get source'
#deb-src http://archive.raspbian.org/raspbian/ wheezy main contrib non-free rpi
deb http://apt.adafruit.com/raspbian/ wheezy main
Change wheezy to jessie, run sudo apt-get update && sudo apt-get install locales and then revert back to wheezy (change jessie back to wheezy): sudo apt-get update
sudo dpkg-reconfigure locales
Now if I run perl I get the following warnings: perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LC_PAPER = "fr_FR.UTF-8", LC_ADDRESS = "fr_FR.UTF-8", LC_MONETARY = "fr_FR.UTF-8", LC_NUMERIC = "fr_FR.UTF-8", LC_TELEPHONE = "fr_FR.UTF-8", LC_IDENTIFICATION = "fr_FR.UTF-8", LC_MEASUREMENT = "fr_FR.UTF-8", LC_TIME = "fr_FR.UTF-8", LC_NAME = "fr_FR.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Generate fr_FR.UTF-8 and en_US.UTF-8: sudo nano /etc/locale.gen and uncomment these lines: en_US.UTF-8
fr_FR.UTF-8
Finally, run sudo locale-gen | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128842/"
]
} |
223,534 | In zsh this works fine: alias foo=ls
foo
But this does not: alias foo=ls; foo Pressing enter an extra time is not an issue when running interactively. But when running through ssh it suddenly becomes a problem: % ssh zsh@server 'alias foo=ls; foo'
zsh:1: command not found: foo
Even with a newline it does not work: % ssh zsh@server 'alias foo=ls;
foo'
zsh:2: command not found: foo
The weird thing is that zsh knows it is aliased: % ssh zsh@server 'alias foo=ls; alias'
foo=ls
run-help=man
which-command=whence
How do I tell zsh that the aliases should be active? | You cannot do it, because aliases are expanded only after history expansion, and the entire line is read in one go; by the time foo is executed, the alias expansion step for that line is already over, so it is too late for the shell to recognize the new alias. The best you can do is define the alias in .zshrc , use a function like in jimmij's answer, or use eval : alias foo=ls; eval foo There's a special case with zsh -c . In that case, aliases which were defined in .zshenv will be expanded. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2972/"
]
} |
223,543 | How do I get the previous month end date, based on processing date? Examples: Processing date = 15jan2015 Expected date = 31dec2014, Processing date = 10feb2015 Expected date = 31jan2015 | With GNU date : $ date +%d%b%Y16Aug2015$ date -d "$(date +%Y-%m-01) -1 day" +%d%b%Y31Jul2015 Some shells have built-in support for date manipulation: With ksh93 : $ printf "%(%d%b%Y)T\n" "1st day, yesterday"31Jul2015 With zsh : $ zmodload zsh/datetime$ strftime -s d %Y-%m-01-12 $EPOCHSECONDS$ strftime -rs d %Y-%m-%d-%H $d$ strftime %d%b%Y $((d-86400))31Jul2015 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/223543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128856/"
]
} |
223,588 | Unfortunately, I am forced to use Windows. So, I installed Cygwin to use some Linux commands. The following command works fine. It replaces an image with its trimmed version. "C:\Program Files\Cygwin\bin\convert" image1.png -trim image1.png However, how can I run this command on all image files? "C:\Program Files\Cygwin\bin\convert" * -trim ???? | You installed Cygwin, so you can just use its shell for maximum command support: "C:\Program Files\Cygwin\cygwin.bat" This will give you a bash shell. Then you can change directory to go to the images' location. Suppose your image location is "D:\Your Name\Images"; to go there, type cd "/cygdrive/d/Your Name/Images" and then call your command using the bash for loop: for file in *
do
convert "$file" -trim "$file"
done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107909/"
]
} |
223,625 | In new version of Kali 2.0 with updated "aircrack-ng" tool, there is a problem between Internet connection and using cracking tools. I have my wireless card as wlan0 . And the Internet connection is fine too. Then, what I do is: airmon-ng start wlan0airodump-ng wlan0mon Now the airodump-ng doesn't work, saying "Device or resource busy". In order to run this I have to kill some processes, using: airmon-ng stop wlan0airmon-ng check killairmon-ng start wlan0airodump-ng wlan0mon Now the cracking process is up . I can start cracking, but now I cannot connect to the Internet after the kill process. On typing iwconfig in terminal, I get this: root@kali:~#iwconfigwlan0mon IEEE 802.11bgn ESSID:"myessid" Mode:Managed Frequency:2.412 GHz Access Point: 00:00:00:00:00:00 Bit Rate=108 Mb/s Tx-Power=20 dBm Retry short limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=42/70 Signal level=-68 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:2 Invalid misc:9 Missed beacon:0eth0 no wireless extensions.lo no wireless extensions. And, on running ifconfig on terminal, I get this: root@kali:~#ifconfiglo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:5604 errors:0 dropped:0 overruns:0 frame:0 TX packets:5604 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:452904 (442.2 KiB) TX bytes:452904 (442.2 KiB) There is no wlan0 or wlan0mon . I also restarted the network manager by: service network-manager restart But nothing changed. Though it displays wlan0 when checked using ifconfig this time. | I had nearly the same issue. After messing around with airmon-ng I couldn't connect to any networks. Even the network manager icon disappeared from the taskbar (KDE). If I checked iwconfig , I would see eth0, lo, and wlan0mon instead of just wlan0. Doing: ifconfig wlan0 up just told me no such device exists. That clued me into how to potentially fix the mode. Here are the commands that restored my internet access: First, restart your network manager: service NetworkManager restart (Your network manager service might be called Network-Manager) Let's see what your wireless adapter is doing: iwconfig (It might be called something like wlan0mon instead of wlan0 indicating it is in monitor mode still) Since it is still in monitor mode, let's turn the normal mode back on: airmon-ng start wlan0 7 (The last number is the channel and can probably be omitted) Now let's stop the monitoring interface: airmon-ng stop wlan0mon And finally, let's turn your normal network adapter back on: ifconfig wlan0 up Check for the normal adapter now: ifconfig (Should no longer show the "mon" equivalent and instead show wlan0 or whatever your adapter is called in normal mode) Now you can use your network manager app to reconnect to the network for browsing. Not sure why I haven't seen this solution. Most end up rebooting to get back to normal internet mode. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128802/"
]
} |
223,636 | Is it possible to include an attachment with sendmail? I am generating the following emailfile.eml file with the following layout: From: Company Name <[email protected]>
To: [email protected]: [email protected]: Generated Output
Mime-Version: 1.0
This will be the body copy even though it's terrible
I am sending these emails using # /usr/sbin/sendmail -t < emailfile.eml This part is working fine but I would like to include an attachment with this email. | Posting the solution that worked for me in case it can help anyone else; sorry it's so late. The most reliable way I found for doing this was to include the attachment as base64 in the eml file itself; below is an example of the eml contents. Note 01: the base64 for the file comes from running the base64 command on Linux with the attachment as an argument (should work with any base64 tool). Note 02: the string used for the boundary is just nonsense using the date and random upper case letters. Filename: emlfile.eml From: Sender <[email protected]>
To: [email protected]: [email protected]: [email protected]: Generic Subject
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="19032019ABCDE"
--19032019ABCDE
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Generic Body Copy
--19032019ABCDE
Content-Type: application;
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="MyPdfAttachment.pdf"
*base64 string goes here (no asterisks)*
--19032019ABCDE--
Then the emlfile.eml file can be sent using the command below and it will include the attachment: # /usr/sbin/sendmail -t < emlfile.eml | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/223636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89568/"
]
} |
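A sketch of assembling such a message non-interactively; the header.txt/footer.txt file names are hypothetical (the text before and after the base64 section of the layout above), and the attachment name follows the example:
{ cat header.txt; base64 MyPdfAttachment.pdf; cat footer.txt; } > emlfile.eml
/usr/sbin/sendmail -t < emlfile.eml
base64 wraps its output at 76 columns by default, which is the line length MIME expects.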
223,642 | I'm looking for a small linux (in memory and disk usage) distribution to be used in a network of virtual machines for lab testing. Suggestions or pointers to related information? Addendum Now trying Alpine Linux ... | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/223642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128929/"
]
} |
223,670 | File file1.txt contains lines like: /api/purchase/<hash>/index.html For example: /api/purchase/12ab09f46/index.html File file2.csv contains lines like: <hash>,timestamp,ip_address For example: 12ab09f46,20150812235200,22.231.113.64 a77b3ff22,20150812235959,194.66.82.11 I want to filter file2.csv removing all lines where the value of hash is present also in file1.txt. That's to say: cat file1.txt | extract <hash> | sed '/<hash>/d' file2.csv or something like this. It should be straightforward, but I seem unable to make it work. Can anyone please provide a working pipeline for this task? | cut -d / -f 4 file1.txt | paste -sd '|' | xargs -I{} grep -v -E {} file2.csv Explanation: cut -d / -f 4 file1.txt will select the hashes from the first file paste -sd '|' will join all the hashes into a regular expression ex. H1|H2|H3 xargs -I{} grep -v -E {} file2.csv will invoke grep with the previous pattern as an argument, xargs will replace {} with the content of the STDIN If you don't have paste you could replace it with tr "\\n" "|" | sed 's/|$//' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/223670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128945/"
]
} |
223,690 | I have a httpd config file from which I would like to delete a whole block: <Directory "/var/www/html">
#
# Possible values for the Options directive are "None", "All",
# or any combination of:
# Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
#
# Note that "MultiViews" must be named *explicitly* --- "Options All"
# doesn't give it to you.
#
# The Options directive is both complicated and important. Please see
# http://httpd.apache.org/docs/2.4/mod/core.html#options
# for more information.
#
Options Indexes FollowSymLinks
#
# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
# Options FileInfo AuthConfig Limit
#
AllowOverride None
#
# Controls who can get stuff from this server.
#
Require all granted
</Directory>
from <Directory "/var/www/html"> to the closing </Directory> . I tried: sed '/-<Directory "\/var\/www\/html">/,/-<\/Directory>/d' /etc/httpd/conf/httpd.conf but without success. | Your command does not work because of the dashes before the <Directory ...> (and </Directory> ) patterns. This should work: sed '/<Directory "\/var\/www\/html">/,/<\/Directory>/d' /etc/httpd/conf/httpd.conf Also, to make this more readable, you may want to use another character than / as the delimiter, for example # , like so: sed '\#<Directory "/var/www/html">#,\#</Directory>#d' /etc/httpd/conf/httpd.conf | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119763/"
]
} |
223,727 | I have a file that looks like the following: chr19 61336212 + 0 0 CG CGT
chr19 61336213 - 0 0 CG CGG
chr19 61336218 + 0 0 CG CGG
chr19 61336219 - 0 0 CG CGC
chr19 61336268 + 0 0 CG CGG
chr19 61336269 - 0 0 CG CGA
chr19 61336402 + 0 0 CG CGG
chr19 61336403 - 0 0 CG CGT
I want to split this file on every interval of 10000 in the 2nd field (NOT by lines, but by number intervals). So for this file I would like to split from the first line (the line with 61336212) up to the line that reaches 61346211 (61336212+9999), then from 61346212 to 61356211, and so on and so forth. As you can see, the numbers in the 2nd field/column are not contiguous. Is there a way to do this? | awk 'NR==1 {n=$2}
{ file = sprintf("file.%.4d", ($2-n)/10000)
if (file != last_file) {
close(last_file)
last_file = file
}
print > file
}' Would write to file.0000 , file.0001 ... (the number being int(($2-n)/10000) where n is $2 for the first line). Note that we close files once we've stopped writing to them, as otherwise you'd reach the limit on the number of simultaneously open files after a few hundred files (GNU awk can work around that limit, but then performance degrades quickly). We're assuming those numbers are always going up. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/223727",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123923/"
]
} |
223,734 | How to use wget to download files from OneDrive? (and batch files and entire folders, if possible) | Update (April 2021): It looks like this solution NO LONGER WORKS WITH ONEDRIVE FOR BUSINESS. There is one way that works for me (based on How to Make Direct Link of OneDrive Files ): Right-click on the file you are interested in downloading (from the web interface), and choose Embed. Press "Generate HTML code to embed this file". Copy the part contained in the quotes of src; this is your link. It will look like <https://onedrive.live.com/embed?cid=6EBB03E38A53ED3E&resid=6EBB03E38A53ED3E%21116&authkey=AC4lDqtLG8LqfiA>. Replace embed with download . It will then look like https://onedrive.live.com/download?cid=6EBB03E38A53ED3E&resid=6EBB03E38A53ED3E%21116&authkey=AC4lDqtLG8LqfiA . Feed it to wget using the following syntax (the quotes are required): wget --no-check-certificate "https://onedrive.live.com/download?cid=6EBB03E38A53ED3E&resid=6EBB03E38A53ED3E%21116&authkey=AC4lDqtLG8LqfiA" Enjoy. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/223734",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105827/"
]
} |
223,746 | What are the contents of this monolithic code base? I understand processor architecture support, security, and virtualization, but I can't imagine that being more than 600,000 lines or so. What are the historic & current reasons drivers are included in the kernel code base? Do those 15+ million lines include every single driver for every piece of hardware ever? If so, that raises the question: why are drivers embedded in the kernel and not separate packages that are auto-detected and installed from hardware IDs? Is the size of the code base an issue for storage-constrained or memory-constrained devices? It seems it would bloat the kernel size for space-constrained ARM devices if all that was embedded. Are a lot of lines culled by the preprocessor? Call me crazy, but I can't imagine a machine needing that much logic to run what I understand to be the role of a kernel. Is there evidence that the size will be an issue in 50+ years due to its seemingly ever-growing nature? Including drivers means it will grow as hardware is made. EDIT : For those thinking this is the nature of kernels, after some research I realized it isn't always. A kernel is not required to be this large, as Carnegie Mellon's microkernel Mach was listed as an example 'usually under 10,000 lines of code' | Drivers are maintained in-kernel so that when a kernel change requires a global search-and-replace (or search-and-hand-modify) for all users of a function, it gets done by the person making the change. Having your driver updated by people making API changes is a very nice advantage, instead of having to do it yourself when it doesn't compile on a more recent kernel. The alternative (which is what happens for drivers maintained out-of-tree) is that the patch has to get re-synced by its maintainers to keep up with any changes. A quick search turned up a debate over in-tree vs. out-of-tree driver development. The way Linux is maintained is mostly by keeping everything in the mainline repo. Building of small stripped-down kernels is supported by config options to control #ifdef s. So you can absolutely build tiny stripped-down kernels which compile only a tiny part of the code in the whole repo. The extensive use of Linux in embedded systems has led to better support for leaving stuff out than Linux had years earlier when the kernel source tree was smaller. A super-minimal 4.0 kernel is probably smaller than a super-minimal 2.4.0 kernel. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/223746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66928/"
]
} |
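The stripping-down described above can be seen with the kernel's own build targets; a sketch, assuming a reasonably recent mainline source tree:
make tinyconfig            # start from the smallest possible configuration
make menuconfig            # then enable only the drivers and features you need
make -j"$(nproc)"
Only the small fraction of the tree selected by the configuration is actually compiled into the image.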
223,772 | How is it possible to use the GNU find command to match several file types at a time (in one search command)? The man page says: -type c File is of type c: b block (buffered) special c character (unbuffered) special d directory p named pipe (FIFO) f regular file l symbolic link; this is never true if the -L option or the -follow option is in effect, unless the symbolic link is broken. If you want to search for symbolic links when -L is in effect, use -xtype. s socket D door (Solaris) and I want to search for files ( f ) and symbolic links ( l ) and pipe the result to another process. How can I search for both at the same time? I have tried: find -type fl | …
find -type f -type l | …
find -xtype f -type l | …
I know a workaround would be to use a subshell (find -type f; find -type l) | … but I just want to know if it is possible. | You can group terms and use logical operators with find, but you have to escape the parens. So you could look for all files and links like find \( -type f -o -type l \) <other filters> and if you wanted all files and links whose name starts with t you could do find \( -type f -o -type l \) -name 't*' You only need the parens if you want to group things and combine them with other operators; if you have no other search criteria you can omit them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31673/"
]
} |
223,778 | How can I run an infinite loop in the background, while continuing on with the script's execution? Example "script":
while true; do something_in_the_background; done
do_something_while_the_loop_goes_on_in_the_background
for i in 1 2 3; do somethingelse; done
exit 0
This (notice the & ) seems to crash the whole system after a short while:
while true; do
    something_in_the_background &
done
do_something_while_the_loop_goes_on_in_the_background
for i in 1 2 3; do somethingelse; done
exit 0 | With the & inside the loop it will start a new process in the background and, as fast as it can, do it again without waiting for the first process to end. Instead, I think you want to put the loop itself into the background, so put the & on the loop, like
while /bin/true; do
    something_in_the_background
done &
# more stuff
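One practical follow-up: a loop backgrounded this way keeps running after the rest of the script finishes, so it is common to record its PID and stop it before exiting; a rough sketch, assuming the placeholder commands from the question:
while true; do something_in_the_background; done &
loop_pid=$!                  # PID of the backgrounded loop

do_something_while_the_loop_goes_on_in_the_background

kill "$loop_pid"             # stop the loop before the script exits
exit 0
| {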
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/223778",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129013/"
]
} |
223,814 | If I have a number of directories named, for example, 10001 through 10025, is there any reason to use ls 1*/foo vs. ls 100??/foo ? I have a lot more than 25 of them, so I'm mostly curious whether there is any difference in speed. I know the difference in use between the two: the asterisk will match longer file names, like 10001.backup. But let's say I don't have any files that don't follow my conventions. Is there any behind-the-scenes difference? | Function: They mean different things. The asterisk matches zero to infinity characters. The question mark matches exactly one character. From the references above: The * character serves as a "wild card" for filename expansion in globbing. The ? character serves as a single-character "wild card" for filename expansion in globbing… Performance: tl;dr: there is no detectable difference in performance. I tested performance by using a directory filled with 36 sub-directories, each named with a single character. There were about 70 000 files in the subdirectories combined. I tested the following.
$ time ls ?/* -d >/dev/null
$ time ls */* -d >/dev/null
I alternated the two commands, running each ten times. Here are the results for the real time, in seconds.
?       *
0.318   0.326
0.355   0.212
0.291   0.351
0.291   0.265
0.287   0.283
0.362   0.23
0.248   0.33
0.286   0.283
0.293   0.351
0.233   0.352
After statistical analysis (paired t-test, two-tailed), I could detect no difference between the two in performance (p value = 0.95). EDIT: More samples. I repeated the above analysis with 200 samples each, again alternating tests.
$ for i in {1..200}; do time (ls */* -d >/dev/null) 2>> /tmp/time_asterisk; time (ls ?/* -d >/dev/null) 2>> /tmp/time_question_mark; done
Here are the raw data for ? and * . Again, I could detect no significant difference (p value = 0.55), and the distributions of the two tests look more similar. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124762/"
]
} |
223,823 | I know that a filename in Linux has no restriction whatsoever except for two characters, '/' and '\0'. I know that '/' is prohibited because it is the directory separator, but is there any other reason? Also, on my terminal I can create a file or a directory with the name \0, so I wonder how to write the null character correctly, because obviously it shouldn't allow me to have a filename with a null in it:
mkdir '\0'
will create a directory named \0. One more question: if I want to include $ in my filename, I can use the backslash;
mkdir \$myfile
will create a directory named $myfile. However, I can do the same if I surround the dollar sign with single quotes or double quotes: mkdir \$myfile is the same as mkdir '$'myfile is the same as mkdir "$"myfile is the same as mkdir '$myfile' is the same as mkdir "$myfile". So my question is: are the single and double quotes a substitute for the escaping backslash? Also, what other characters need escaping in bash besides $, space and backslash? | Printing the null character: in many recent shells you can write the null character with the dollar-single-quotes format $'\0', the hexadecimal format \x00, the unicode format \u0000 or \U00000000, or, just as you tried, in octal: '\0'. The point is that the command has to understand what to do with backslash-escaped characters. For example, in the case of echo one usually needs to add the -e option, and in the case of printf that would be %b. Let's check if it works:
$ echo -ne '\0'
$
So it produces nothing, just like echo -ne ''. Similarly:
$ printf '%b' '\0'
$
Let's add some characters around it (I will stick with printf '%b' from now on as more robust, but the effect is similar with echo -ne):
$ printf '%b' a'\0'b
ab
Only two characters were printed; where did the null go?
$ printf '%b' a'\0'b | wc -c
3
Let's compare it with a''b:
$ printf '%b' a''b | wc -c
2
As one more check that we really print a null character before trying to create a file, let's pass the printed value to a command which will throw an error on it, like xargs:
$ printf '%b' a'\0'b | xargs echo
xargs: Warning: a NUL character occurred in the input. It cannot be passed through in the argument list. Did you mean to use the --null option?
a
Notice how only a was printed at the end. Of course xargs -0 works fine:
$ printf '%b' a'\0'b | xargs -0 echo
a b
Creating the file with null? Now let's try to create a file with a null character in its name:
$ touch $'\0'
touch: cannot touch ‘’: No such file or directory
$ mkdir $'\0'
mkdir: cannot create directory ‘’: No such file or directory
# let's try another approach - using printf in command substitution:
$ touch "$(printf '%b' '\0')"
touch: cannot touch ‘’: No such file or directory
$ mkdir "$(printf '%b' '\0')"
mkdir: cannot create directory ‘’: No such file or directory
The result is exactly the same as with touch ''; it seems the null is just ignored altogether. What if we skip the double quotes around the command substitution?
$ touch $(printf '%b' '\0')
touch: missing file operand
Try 'touch --help' for more information.
$ mkdir $(printf '%b' '\0')
mkdir: missing operand
Try 'mkdir --help' for more information.
This is the same situation as touch/mkdir without any arguments at all. Yet another result comes from surrounding the null with text:
$ touch "$(printf '%b' a'\0'b)"
$ ls
a    # in zsh
ab   # in bash
One can also try to redirect standard output to $'\0', but all one gets is a different kind of error. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100193/"
]
} |
223,835 | I have a series of commands a, b, c which I am chaining together with &&: a && b && c. I want to capture the output of all the commands, both stdout and stderr.
a && b && c 2>&1 > capture_file
only captures the output from the c command. | { a && b && c; } >capture_file 2>&1
Note the order of redirections: you have to redirect stdout first.
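A short demonstration of why the order matters: redirections are processed left to right, so with the operands reversed, stderr is duplicated onto whatever stdout points at (the terminal) before stdout is sent to the file. A sketch:
{ a && b && c; } 2>&1 >capture_file   # wrong: stderr still ends up on the terminal
{ a && b && c; } >capture_file 2>&1   # right: stderr follows stdout into the file
| {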
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/223835",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125479/"
]
} |
223,882 | I always write stdin redirection after the command, because for me it's more natural to have first the command and then the redirections (if any):
some-command < input-file > output-file
For years, I've seen people writing the stdin redirection before the command, to suggest the direction of data flow:
< input-file some-command > output-file
(or without spaces after < and > ). Is this ordering accepted by POSIX, or just accepted by many shells (in my Fedora 21 it is accepted by bash, dash, tcsh, ksh and zsh)? | That behavior was defined by POSIX here: If more than one redirection operator is specified with a command, the order of evaluation is from beginning to end. And here: A "simple command" is a sequence of optional variable assignments and redirections, in any sequence, optionally followed by words and redirections, terminated by a control operator. This was already the case in the Bourne shell, which POSIX used as a basis: Before a command is executed its input and output may be redirected using a special notation interpreted by the shell. The following may appear anywhere in a simple-command or may precede or follow a command and are not passed on to the invoked command. (…) Unlike the original Bourne shell, POSIX doesn't allow a redirection to precede a complex command like while … done, ( … ), etc. Note that the order of redirections is important, because it controls your command's behavior and can spare you some odd results upon failure. Example:
command <input >output
If the command fails to open input (due to permissions, a non-existent file, ...) it is terminated without creating an empty output file. If you swap the redirection positions:
command >output <input
an empty output file is created even when opening input fails. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223882",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17326/"
]
} |
223,888 | I'm trying to add a system call using Linux kernel 4.1.6, but all the documentation I can find is for older versions. Does anyone know how it's done in the newer kernels, or have any good references? There are supposed to be 3 steps: 1. Add to the system call table. I've worked out that they now use arch/x86/syscalls/syscall_64.tbl instead of entry.S, so I've put something in there. 2. Add to the asm/unistd.h file. Apparently the unistd.h file is generated automatically now, so we don't have to update it manually? So I've done nothing for this step, as the file doesn't exist. https://stackoverflow.com/questions/10988759/arch-x86-include-asm-unistd-h-vs-include-asm-generic-unistd-h 3. Compile the syscall into the kernel. I've added the actual system call code to kernel/sys.c, as suggested in a book based on kernel 2.6 (the Linux Kernel Development book by Robert Love). I've compiled the kernel again. I then wrote a client program as suggested in the book, but it says unknown type name 'helloworld' when I try to compile it. My program is different from the book's, but the structure is the same.
#include <stdio.h>

#define __NR_helloworld 323

__syscall0(long, helloworld)

int main(){
    printf("I will now call helloworld syscall:\n");
    helloworld();
    return 0;
}
The Internet (and available books) seem to be seriously lacking this information, or Google is not as smart as it would like to think. Anyway, any help is appreciated. Thanks. | According to the _syscall(2) man page, the _syscall0 macro may be obsolete and requires #include <linux/unistd.h>; indeed, Linux 4.x kernels don't have it. However, you might install musl-libc and use its _syscall function. And you could simply use the indirect syscall(2) in your user code. So your testing program would be
#define _GNU_SOURCE         /* See feature_test_macros(7) */
#include <unistd.h>
#include <sys/syscall.h>
#include <stdio.h>

#define __NR_helloworld 323

static inline long mysys_helloworld(void) { return syscall(__NR_helloworld, NULL); }

int main (int argc, char**argv) {
    printf("will do the helloworld syscall\n");
    if (mysys_helloworld()) perror("helloworld");
    return 0;
}
Above code is untested! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124065/"
]
} |
223,897 | I have an HTML file. I want to remove all lines that do not start with <tr>. I tried:
cat my_file | sed $'s/^[^tr].*//' | sed '/^$/d'
but it deleted all the lines. | Try this with GNU sed:
sed -n '/^<tr>/p' file
or
sed '/^<tr>/!d' file
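For what it's worth, the original attempt fails because [^tr] is a bracket expression matching a single character that is neither t nor r, not the literal string tr, so it behaves nothing like a "starts with <tr>" test. If sed isn't required, grep can express the same filter; a sketch:
grep '^<tr>' my_file      # keep only lines starting with <tr>
| {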
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/223897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
223,902 | When I use a pendrive with two partitions on a Windows system, it only recognizes the first partition that I've created on that pendrive. I have a pendrive with two partitions: an ext4 and an NTFS (the one that should be recognized). So the problem is that when I use this pendrive on Windows, it tries to read my ext4 partition, since that is the first one I created. I'm not sure if just changing the pendrive's partition name from sda2 to sda1 on Linux could solve my problem on Windows, but that is the only solution I can think of right now. | | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/223902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103357/"
]
} |
223,924 | I am using offlineimap to fetch mail from several IMAP servers. This used to work, but today offlineimap has been unable to fetch mail, producing the following errors:
*** Processing account example
Establishing connection to imap.gmail.com:993
ERROR: Unknown SSL protocol connecting to host 'imap.gmail.com' for repository '<redacted>'. OpenSSL responded: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
*** Finished account 'example' in 0:00
Relevant parts of my configuration are:
[Account example]
localrepository = local-example
remoterepository = remote-example

[Repository local-example]
type = Maildir
localfolders = ~/mail/example

[Repository remote-example]
maxconnections = 1
type = Gmail
remotehost = imap.gmail.com
remoteuser = [email protected]
remotepasseval = get_keychain_pass(account="[email protected]", server="imap.gmail.com")
ssl = yes
sslcacertfile = /usr/local/etc/openssl/certs/dummycert.pem
The sslcacertfile configuration was created in response to this SO answer. The get_keychain_pass function is from this offlineimap configuration. I am using offlineimap 6.5.7 built with Homebrew on OS X 10.10.4. | The problem started when I installed Homebrew's version of Python rather than the Apple version. The error was resolved by running
brew uninstall python
I discovered this was the solution by reading about a similar error produced by another Python program on OS X.
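If removing the Homebrew Python is not desirable, a first diagnostic step is to check which interpreter offlineimap actually runs under, since the certificate-verification behavior comes from that interpreter's ssl module; a sketch (paths vary by install):
$ head -n 1 "$(which offlineimap)"                      # the shebang names the interpreter
$ python -c 'import ssl; print(ssl.OPENSSL_VERSION)'    # the OpenSSL that Python links against
| {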
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29028/"
]
} |
223,973 | I'm trying to rewrite an init.d service file that contains the following code:
if [ ! -r /var/spool/torque/server_priv/serverdb ]; then
    DAEMON_SERVER_OPTS="-t create $DAEMON_SERVER_OPTS"
fi
start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_SERVER_OPTS
I have rewritten this as follows:
[Service]
Type=forking
ExecStart=/usr/sbin/pbs_server
PIDFile=/var/spool/torque/server_priv/server.lock
But I have no idea how to represent the if in systemd. | As it's already been said, there is deliberately no support for complex logic in systemd. If there is any start-up logic to do (and it isn't part of the daemon itself), it is completely OK to write a small shell script and specify it in ExecStart=. There is one thing to consider, though. The shell script must not do any process management by itself. The shell script must exec the daemon. This is required to avoid interference with systemd's own process watching and management. An example of a wrong shell script:
#!/bin/sh
if [ ! -r /var/spool/torque/server_priv/serverdb ]; then
    DAEMON_SERVER_OPTS="-t create $DAEMON_SERVER_OPTS"
fi
$DAEMON -- $DAEMON_SERVER_OPTS
This makes the daemon a child of the shell interpreter. If the daemon does not fork and the readiness protocol ( Type= ) is simple, then it's just a redundant process hanging around. Otherwise, if the daemon forks and you set Type=forking, then the whole thing will triple-fork, not double-fork, and systemd will kill the daemon. An example of a correct shell script:
#!/bin/sh
if [ ! -r /var/spool/torque/server_priv/serverdb ]; then
    DAEMON_SERVER_OPTS="-t create $DAEMON_SERVER_OPTS"
fi
exec $DAEMON -- $DAEMON_SERVER_OPTS
This replaces the shell process with the daemon.
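Tying this back to the unit in the question, the wrapper script would then be referenced from ExecStart=; a sketch, assuming the script is installed as /usr/local/sbin/pbs_server-start (the path is illustrative):
[Service]
Type=forking
ExecStart=/usr/local/sbin/pbs_server-start
PIDFile=/var/spool/torque/server_priv/server.lock
| {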
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/223973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2671/"
]
} |
224,015 | I want to know which files from the /proc directory, and which fields of those files, I need in order to calculate the memory usage of a given PID. I've been using the "stat" file and the "vsize" parameter in that file, but it isn't a good measure. Does anyone know a better formula for this? Thanks, Ana. | Indeed you need to use /proc/ ; so read carefully proc(5). For process 1234 you want to read /proc/1234/maps (or /proc/1234/smaps ) to get the address space, and to read /proc/1234/status & /proc/1234/statm. For your own process (programmatically), use /proc/self/maps , /proc/self/status , /proc/self/statm. Notice that "memory usage" is a very ambiguous term on Linux. How would you count a file segment mmap-ed by two processes? See mmap(2) & getrusage(2). Try cat /proc/self/maps and cat /proc/$$/maps in a terminal. Read the wiki pages on address space, virtual memory, page cache, ASLR, ELF, RSS, working set ...
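As a concrete starting point, the usual headline numbers can be pulled out with a one-liner; a sketch for PID 1234 (field names per proc(5); statm counts pages, so the page size is queried rather than assumed):
$ grep -E 'VmPeak|VmSize|VmRSS' /proc/1234/status       # values reported in kB
$ awk -v p="$(getconf PAGESIZE)" '{print $1*p/1024 " kB virtual, " $2*p/1024 " kB resident"}' /proc/1234/statm
| {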
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/224015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129178/"
]
} |