source_id | question | response | metadata |
---|---|---|---|
288,027 | What are the differences between Debian Linux Kernel and Linux-libre in terms of freedom related issues? I know the Debian Linux Kernel can load non-free modules while in Linux-libre they have been blacklisted. | You've identified pretty much the only difference: the Debian kernel can load firmware, the Linux-libre kernel can't. Both kernels are free software, even as far as the Free Software Foundation is concerned — the FSF considers the Debian GNU/Linux distribution to be free software as long as no repositories are used beyond the main one; the issue they have with Debian is that Debian hosts non-free repositories on the same infrastructure. Philosophically speaking, you could consider the difference to be as follows: the Debian kernel doesn't include any non-free firmware (bugs aside), but it allows users to load non-free firmware if they wish to do so; the Linux-libre kernel doesn't include any non-free firmware or anything looking like firmware, and it prevents users from loading non-free firmware even if they wish to do so. Linux-libre is built by running a deblob script on the kernel source code. This goes through the kernel source code, and makes various firmware-related changes: any firmware for which source code is available is preserved, but the script makes sure the source code is available; any module requiring firmware is stripped of the ability to load the firmware; any source code which looks like firmware (sequences of numbers) is removed; any file containing only firmware ( e.g. the contents of firmware/radeon ) is removed. Some extra work goes into Linux-libre to restore functionality in certain cases; for example, the radeon module is modified so that some r600 -supported cards can still be used, even without firmware. (Look for "Something like this might work on other radeon cards too." in the deblob script.) The Debian distribution includes one firmware package, firmware-linux-free ; this contains only firmware for which source code is available. The non-free repositories also contain a number of firmware packages built from firmware-nonfree , but these aren't part of the main distribution. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/288027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173831/"
]
} |
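As a practical aside to the answer above, the firmware dependency it describes can be inspected on a running system with modinfo, which lists the firmware blobs a module asks the kernel to load. A minimal sketch only; the module name iwlwifi is just an example, substitute one present on your machine:

```bash
# Print the firmware files requested by a module (empty output means none are requested).
modinfo -F firmware iwlwifi
```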
288,037 | I'm experiencing a strange behaviour of xmobar right after i enter xmonad . When i xmonad (from .xinitrc , i use XDM) my xmobar appears but it is not either at the top or bottom of the window stack . Once i start an application (e.g. terminal emulator by pressing Mod + Shift + Return ) the application uses the entire screen, as if the xmobar was at the bottom. Then i press Mod + B and nothing happens, once i press Mod + B a second time xmobar is lifted to the top reducing the application window size. After that Mod + B works correctly for the remainder of the xmonad session, i.e. it lowers/raises (hides/shows) the xmobar . I'm confident i misconfigured something. My xmonad.hs looks like: import XMonadimport XMonad.Hooks.SetWMNameimport XMonad.Hooks.DynamicLogmain = do xmonad =<< statusBar "xmobar" myPP toggleStrutsKey defaultConfig { terminal = "urxvt" , focusFollowsMouse = True , clickJustFocuses = False , borderWidth = 1 , modMask = mod4Mask , workspaces = myworkspaces , normalBorderColor = "#dddddd" , focusedBorderColor = "#00dd00" , manageHook = mymanager , startupHook = setWMName "LG3D" }myPP = xmobarPP { ppOutput = putStrLn , ppCurrent = xmobarColor "#336433" "" . wrap "[" "]" --, ppHiddenNoWindows = xmobarColor "grey" "" , ppTitle = xmobarColor "darkgreen" "" . shorten 20 , ppLayout = shorten 6 --, ppVisible = wrap "(" ")" , ppUrgent = xmobarColor "red" "yellow" }toggleStrutsKey XConfig { XMonad.modMask = modMask } = (modMask, xK_b)myworkspaces = [ "code" , "web" , "media" , "irc" , "random" , "mail" , "docs" , "music" , "root" ]mymanager = composeAll [ className =? "gimp" --> doFloat , className =? "vlc" --> doFloat ] Whilst the beginning of my .xmobarrc looks as follows: Config { -- appearance font = "xft:Bitstream Vera Sans Mono:size=9:bold:antialias=true" , bgColor = "black" , fgColor = "#646464" , position = Top , border = BottomB , borderColor = "#646464" -- layout , sepChar = "%" -- delineator between plugin names and straight text , alignSep = "}{" -- separator between left-right alignment , template = "%battery% | %multicpu% | %coretemp% | %memory% | %dynnetwork% | %StdinReader% }{ %date% || %kbd% " -- general behavior , lowerOnStart = False -- send to bottom of window stack on start , hideOnStart = False -- start with window unmapped (hidden) , allDesktops = True -- show on all desktops , overrideRedirect = True -- set the Override Redirect flag (Xlib) , pickBroadest = False -- choose widest display (multi-monitor) , persistent = True -- enable/disable hiding (True = disabled) -- plugins (i do not use any) , commands = [ -- actually several commands are in here ]} I tried several combinations of: , lowerOnStart =, hideOnStart = (True/True, True/False, False/True and False/False as shown now). But the behaviour before i press Mod + B two times does not change. I believe that i have misconfigured xmonad in some way not xmobar but that is just a guess. My .xinitrc might be of help: #!/bin/shif test -d /etc/X11/xinit/xinitrc.dthen # /etc/X11/xinit/xinitrc.d is actually empty for f in /etc/X11/xinit/xinitrc.d/* do [ -x "$f" ] && source "$f" done unset ffi# uk keyboardsetxkbmap gbxrdb .Xresourcesxscreensaver -no-splash &# java behaves badly in non-reparenting window managers (e.g. xmonad)export _JAVA_AWT_WM_NONREPARENTING=1# set the background (again, because qiv uses a different buffer)/usr/bin/feh --bg-scale --no-fehbg -z /usr/share/archlinux/wallpaper/a*.jpg# pulse audio for alsathen /usr/bin/start-pulseaudio-x11fiexec xmonad | Two months later I figured it out. 
The problem is that statusBar does not register the events of Hooks.manageDocks properly. Once xmonad is running all works well because manageDocks is able to update the Struts on every window event. But at the moment xmonad is starting, the event of making the first window fullscreen happens before the events from manageDocks . This makes that first open window ignore the existence of xmobar . manageDocks has its event handler that must be set as the last event handler, therefore statusBar cannot be used. Instead, it is necessary to make xmonad call and configure xmobar manually through dynamicLog , manageHook , layoutHook and handleEventHook . A minimalistic configuration for this would be: main = do xmproc <- spawnPipe "xmobar" xmonad $ defaultConfig { modMask = mod4Mask , manageHook = manageDocks <+> manageHook defaultConfig , layoutHook = avoidStruts $ layoutHook defaultConfig -- this must be in this order, docksEventHook must be last , handleEventHook = handleEventHook defaultConfig <+> docksEventHook , logHook = dynamicLogWithPP xmobarPP { ppOutput = hPutStrLn xmproc , ppTitle = xmobarColor "darkgreen" "" . shorten 20 , ppHiddenNoWindows = xmobarColor "grey" "" } , startupHook = setWMName "LG3D" } `additionalKeys` [ ((mod4Mask, xK_b), sendMessage ToggleStruts) ] This makes all events be processed by docksEventHook and ensures that layout changes made by docksEventHook are the last ones applied. Now lowerOnStart = False (or True ) works as expected in all cases within xmobarrc . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/288037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172635/"
]
} |
288,059 | This event actually took place a few years ago, but I still have the unchanged USB flash drive in my possession. I may be out of luck, but I thought I would ask all you smart people here for your suggestions. Short Story: A few years back, my wife wanted to store all of her photos from her iPhone onto a USB flash drive because she was running out of storage. We picked up a brand new USB flash drive from the store, so I assume it had a FAT32 file system. We plugged the flash drive into a Mac OS X and were able to backup all of her photos. We realized after the backup had complete that almost every photo had a duplicate file. photo.jpg had a duplicate file called photo\ 1.jpg . All of the duplicate files ended with the \ 1.jpg suffix. Just having started UNIX, I knew that I could use the shell's simple regex to remove all of the duplicate files, but I ended up not putting my command in quotes... And I ended up executing the following: rm * 1.jpg . As you can see, I told the system to remove every single file and then remove 1.jpg . Instead of telling the system to remove every file that ended in 1.jpg . After this occurred, with my furious wife (at the time girlfriend) next to me, I unplugged the flash drive and stored it in a drawer. Question: Are there any secure UNIX tools to recover data, that was removed with rm , from a USB flash drive? Or am I out of luck? As I stated above, I have not touched the flash drive since the event occurred. If this question is far too broad, feel free to move it to meta or wherever it best fits. | Are there any secure UNIX tools to recover data, that was removed with rm , from a USB flash drive? Yes and, by the way, recovery of photos is one of the most common scenarios. The conditions you described are actually optimal because: you directly deleted the files the file system is not damaged you did not use the drive anymore These conditions lead to two available options. If you care about the file names (or have fragmented files) When you write a lot of pictures sequentially on a drive, the risk of fragmentation is actually very low, but still. To recover files and file names you need a tool which is file-system aware. Enter TestDisk : sudo testdisk /dev/sdb It will show you a step-by-step procedure through a TUI (textual user interface). The essential steps are: scanning the drive selecting the partition pressing P to show the files copying the deleted (red) files with C If you actually just want the photos back For pictures, you might as well not care about the names. Moreover, the file system might be damaged (not your case) and TestDisk would not help. PhotoRec (from the same developer) comes to the rescue: sudo photorec /dev/sdb Here you just need to specify the output directory. You can also disable detection for some file types which you don't care about. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/288059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125535/"
]
} |
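One precaution worth adding, not stated in the answer: work on an image of the stick rather than the stick itself, so nothing can write to the original during recovery. A sketch assuming GNU ddrescue is installed and that the drive shows up as /dev/sdb (check with lsblk first):

```bash
# Image the flash drive once; the source is only read.
sudo ddrescue /dev/sdb usb.img usb.map

# Run the recovery tools against the image instead of the device.
sudo testdisk usb.img
sudo photorec usb.img
```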
288,105 | When I execute less package.rpm , less shows me all sorts of meta info about the package. What is less exactly doing - does it have built in code to be able to extract meta info, or is an rpm structured in a way that the first part just looks like a text file? I would assume the former, since head is not so helpful here. But to get to the real question: If I would like to grep through this metadata that less is showing me, how can I accomplish this? | If you browse through the less man page, you'll notice less has an INPUT PREPROCESSOR feature. echo $LESSOPEN to view the location of this preprocessor, and use less / vim / cat to view its contents. On my machine this preprocessor is /usr/bin/lesspipe.sh and it includes the following for rpms: *.rpm) rpm -qpivl --changelog -- "$1"; handle_exit_status $? In effect, less hands off opening the file to rpm , and shows you the pagination of its output. Obviously, to grep through this info, simply grep the output of rpm directly: grep "foo" < <(rpm -qpivl --changelog -- bar.rpm) Or in general (thanks OrangeDog) grep "foo" < <(lesspipe.sh bar.rpm) Note: $LESSOPEN does not simply hold the location of lesspipe.sh - it begins with a | and ends with a %s so invoking it directly would result in errors. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/288105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49331/"
]
} |
288,117 | I have two files : f1: f2:============== ===============some text line 1 A1some text line 2 A2some text line 3 A3 can I quickly merge these two files to produce f3: some text line 1A1some text line 2A2some text line 3 A3 | It's a job for paste : paste -d'\n' f1.txt f2.txt Example: $ cat foo.txt some text line 1some text line 2some text line 3$ cat bar.txt A1A2A3$ paste -d'\n' foo.txt bar.txt some text line 1A1some text line 2A2some text line 3A3 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/288117",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18290/"
]
} |
288,151 | How can I include spaces as part of a variable used in an svn command for RHEL bash scripting? Or if there's something else wrong with the following, please advise. The SVN URL variable has no spaces, and this section is working: svn checkout $svnUrl $checkoutDir --username $username --password $password --non-interactive --depth immediates --no-auth-cache But the SVN update command that works when hard coded is not working as a variable: updateSubProject="&& svn update --set-depth infinity --no-auth-cache --username $username --password $password --non-interactive"cd project-dir $updateSubProjectcd ../another-project $updateSubProject | Better would be to make a function to do it like updateSubProject() { pushd "$1" svn checkout "$svnUrl" "$checkoutDir" --username "$username" --password "$password" --non-interactive --depth immediates --no-auth-cache popd}updateSubProject project-dirupdateSubProject path/to/another-project this way you aren't trying to store code in a variable, and you'll avoid a lot of the word splitting issues. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288151",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138713/"
]
} |
288,236 | At my organization, we have a number of queue consuming worker processes. We're currently using SupervisorD to manage them, but would like to use SystemD if possible for certain advantages therein. I'm fairly experienced with writing custom units, but I don't immediately have an analog in SystemD land for this. In the SupervisorD documentation a parameter called numprocs is detailed which allows one to set the number of processes they'd like to be started with the service. If I want 30 processes started, it's a one-line change. Is there a setting in SystemD units that will allow me to specify how many of these processes I'd like started? | What Munir mentioned is exactly how you do this. Basically you create a service file, and start it 30 times. Now that may seem a little unweildy, but it has advantages, such as being able to shut one of them down if it's misbehaving, and not have to shut all of them down. There are also some things you can do to make management easier. First, the unit file. Create a file, such as /etc/systemd/system/[email protected] . The important bit is the @ symbol. It's contents might look like: [Service]ExecStart=/bin/sleep 600 %I[Install]WantedBy=multi-user.target Then start it with systemctl start [email protected] , systemctl start [email protected] . The processes that get launched will look like: root 17222 19 0 0.0 0.0 Ss 00:05 /bin/sleep 600 1root 17233 19 0 0.0 0.0 Ss 00:02 /bin/sleep 600 2 Notice that the %I got substituted with whatever you put after the @ when you started it. You can start all 30 with a little shell-fu: systemctl start test@{1..30}.service You can also enable them at boot like any normal service: systemctl enable [email protected] . Now, what I meant by things you can do to make management easier: Maybe you don't want to have to use test@{1..30}.service for managing them all. It is a little unwieldy. You can instead create a new target for your service. Create /etc/systemd/system/test.target with: [Install]WantedBy=multi-user.target Then adjust the /etc/systemd/system/[email protected] so that it looks like: [Unit]StopWhenUnneeded=true[Service]ExecStart=/bin/sleep 600 %I[Install]WantedBy=test.target Reload systemd with systemctl daemon-reload (only necessary if you are modifying the unit file, and didn't skip the earlier version of it). And now enable all the services you want to be managed by doing systemctl enable test@{1..30}.service . (If you had previously enabled the service while it had WantedBy=multi-user.target , disable it first to clear out the dependency) You can now do systemctl start test.target and systemctl stop test.target , and it will start/stop all 30 processes. And again, you can enable at boot like any other unit file: systemctl enable test.target . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/288236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
288,333 | Create the following files in a directory. $ touch .a .b a b A B 你好嗎 My default ls order ignores the presence of leading dots, intermingling them with the other files. $ ls -Altotal 0-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 a-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 .a-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 A-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 b-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 .b-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 B-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:06 你好嗎 I can change LC_COLLATE to put the dotfiles first. $ LC_COLLATE=C ls -Altotal 0-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 .a-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 .b-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 A-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 B-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 a-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:03 b-rw-r--r-- 1 sparhawk sparhawk 0 Jun 8 17:06 你好嗎 Unfortunately this makes the sort order case-sensitive, i.e. A and B precede a and b . Is there a way to print dotfiles first while staying case-insensitive ( A and a precede B and b )? Edit: attempting to modify LC_COLLATE None of the answers so far fully replicate the functionality of ls easily. Conceivably, I could wrap some of them in a function, but this would have to include some detailed code on (e.g.) how to work with no argument vs. supplying a directory as an argument. Or how to deal with an explicit -d flag. Alternatively, I thought that maybe there could be a better LC_COLLATE to use. However, I can't seem to make that work. I'm currently using LC_COLLATE="en_AU.UTF-8" . I checked /usr/share/i18n/locales/en_AU (although I'm not sure if this is the right file, as I can't see any reference to UTF-8 ); I found the following. LC_COLLATEcopy "iso14651_t1"END LC_COLLATE /usr/share/i18n/locales/iso14651_t1 contains copy "iso14651_t1_common" . Finally, /usr/share/i18n/locales/iso14651_t1_common contains <U002E> IGNORE;IGNORE;IGNORE;<U002E> # 47 . I deleted this line, ran sudo locale-gen , and restarted my computer. Unfortunately, this changed nothing. | OP was very close with editing /usr/share/i18n/locales/iso14651_t1_common , but the trick is not to delete the line <U002E> IGNORE;IGNORE;IGNORE;<U002E> # 47 . but rather to modify it to <U002E> <RES-1>;IGNORE;IGNORE;<U002E> # 47 . Why this works The IGNORE statements specify that the full stop (aka period, or character <U002E> ) will be ignored when ordering words alphabetically. To make your dotfiles come first, change IGNORE to a collating symbol that comes before all other characters. Collating symbols are defined by lines like collating-symbol <something-inside-angle-brackets> and they are ordered by the appearance of the line <something-inside-angle-brackets> In my copy of iso14651_t1_common , the first-place collating symbol is <RES-1> , which appears on line 3458. If you file is different, use whichever collating symbol is ordered first. Details about character ordering with LC_COLLATE <U002E> has three IGNORE statements because letters can be compared multiple times in case of ties. To understand this, consider lowercase a and uppercase A (which are part of a group of characters that actually get compared four times): <U0061> <a>;<BAS>;<MIN>;IGNORE # 198 a<U0041> <a>;<BAS>;<CAP>;IGNORE # 517 A Having multiple rounds of comparison allow files that start with "a" and "A" to be grouped together because both are compared as <a> during the first pass, with the next letter determining the ordering. 
If all of the following letters are the same (e.g. a.txt and A.txt ), the third pass will put a.txt first because the collating symbol for lowercase letters <MIN> appears on line 3467, before the collating symbol for uppercase letters <CAP> (line 3488). Implementing this change If you want the period to come first every time a program orders letters using LC_COLLATE , you can modify iso14651_t1_common as described above and rebuild your locations file. But if you want to make this change only to ls and without root access, you can copy the original locale files to another directory before modifying them. What I did My default locale is en_US, so I copied en_US , iso14651_t1 , and iso14651_t1_common to $HOME/path/to/new/locales . There I made the abovementioned change to iso14651_t1_common and renamed en_US to en_DOTFILE . Next I compiled the en_DOTFILE locale with localedef -i en_DOTFILE -f UTF-8 -vc $HOME/path/to/new/locales/en_DOTFILE.UTF-8 To replace the default ls ordering, make a BASH script called ls : #!/bin/bashLOCPATH=$HOME/path/to/new/locales LANG=en_DOTFILE.UTF-8 ls "$@" save it somewhere that appears before /usr/bin on your path, and make it executable with chmod +x ls . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/288333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
288,344 | When I added Debian 8 to my KVM management tool , I found that I could not access the console unless I added console=ttyS0 to the grub boot configuration . It wasn't great but it worked. I am in the process of adding Ubuntu 16.04 to the management tool, and this time when the guest is installed it has the same problem, but I can no longer see the grub menu options when I reboot the instance whilst connected to the console. Therefore, I cannot implement the workaround. I managed to find the IP address of the guest instance by running arp -an on the hypervisor and connecting to the IPs on the KVM bridge until I found the right one. This allowed me to confirm that the guest was installed and running correctly. I would like to be able to connect to the console using sudo virsh console [guest ID] in case something goes wrong with the networking or if openssh suddenly decides to stop working. What do I need to do to be able to connect to the guest ubuntu 16.04 console from the hypervisor? My gut feeling is that I should just need to tweak the configuration settings which are accessed by sudo virsh edit [guestID] . At the moment I have: ...<serial type='pty'> <target port='0'/></serial><console type='pty'> <target type='serial' port='0'/></console>... Extra Info Ubuntu 14.04 KVM hypervisor using kernel 4.2.0-36-generic Virsh 1.2.2 | Update 13th March 2017 For those already in the situation described above, you can fix your existing guest using the original answer below. However, for those of you who would rather never have to go through this pain again, you can just add the following so the %post section of your kickstart file: %post --nochroot( sed -i "s;quiet;quiet console=ttyS0;" /target/etc/default/grub sed -i "s;quiet;quiet console=ttyS0;g" /target/boot/grub/grub.cfg) 1> /target/root/post_install.log 2>&1%end This will ensure that the necessary changes to grub are made as described below, so that new guests you deploy through using the kickstart file won't suffer from this problem. Original Answer For those who manage to connect via SSH after finding out the IP using arp -an on the host, you can perform the following steps ( taken from the bottom of this page ) once you are connected to the guest . Edit the grub configuration file: sudo vim /etc/default/grub Add the text console=ttyS0 to the GRUB_CMDLINE_LINUX_DEFAULT parameter as shown below: Then have the grub menu be rebuilt using your change by executing: sudo update-grub Now you should be able to connect to a working console with virsh console [guest ID] . This will keep working as future kernels are added to the system, but I would much rather have a solution that didn't require me to have SSH access to the guest in the first place . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64440/"
]
} |
288,395 | I am writing a script that will first check if Node is currently installed, if not it will install the latest version of Node. If it is installed, then it will proceed to another function to update it. My current script: #!/bin/bashfunction isNodeInstalled() { clear echo "Checking if Node is installed ..." if command --version node &>/dev/null; then echo "Installing Node ..." curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash - sudo apt-get install nodejs -y echo "Node has been installed." sleep 5 updateNode else echo "Node has already been installed." sleep 5 updateNode fi} | if which node > /dev/null then echo "node is installed, skipping..." else # add deb.nodesource repo commands # install node fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288395",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173825/"
]
} |
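A variant of the same test that avoids the external which command is the shell built-in command -v, whose exit status is reliable across shells. Only a sketch; the commented branches stand in for the same repository and install steps as in the question:

```bash
if command -v node >/dev/null 2>&1; then
    echo "node is installed, skipping..."
else
    # add deb.nodesource repo commands
    # install node
    :   # placeholder so the otherwise-empty branch is valid shell
fi
```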
288,409 | if I want to count the lines of code, the trivial thing is cat *.c *.h | wc -l But what if I have several subdirectories? | The easiest way is to use the tool called cloc . Use it this way: cloc . That's it. :-) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/288409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9115/"
]
} |
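If installing cloc is not an option, a rough recursive count of physical lines (blank lines and comments included, so not a true lines-of-code figure) can be had with find and wc:

```bash
# Count every line in all .c and .h files below the current directory.
find . \( -name '*.c' -o -name '*.h' \) -print0 | xargs -0 cat | wc -l
```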
288,427 | I have a text file that looks like this: { "mimeType": "web", "body": "adsfdf", "data_source_name": "abc", "format": "web", "url": "http://google.com/", "urls": "http://google.com/", "lastModified": "123123", "title": "Google", "docdatetime_dt": "1231234", "wfbdomain": "google.com", "id": "http://google.com", }, { "mimeType": "web", "body": "adsfdf", "data_source_name": "zdf", "format": "web", "url": "http://facebook.com/", "urls": "http://facebook.com/", "lastModified": "123123", "title": "Facebook", "docdatetime_dt": "1231234", "wfbdomain": "facebook.com", "id": "http://facebook.com", }, { "mimeType": "web", "body": "adsfdf", "format": "web", "url": "http://twitter.com/", "urls": "http://twitter.com/", "lastModified": "123123", "title": "Twitter", "docdatetime_dt": "1231234", "wfbdomain": "twitter.com", "id": "http://twitter.com", } If you see the third one in the above block, you will notice that "data_source_name": .... is missing. I have a file that is really huge and want to check if this particular thing is missing, and if missing, print/echo it. I tried sed but am unable to figure out how to use it properly. Is it possible using sed or something else? | The easiest way is to use the tool called cloc . Use it this way: cloc . That's it. :-) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/288427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152598/"
]
} |
288,428 | I'd like to learn why traceroute sends three packets per hop by default. (Nothing important, I'm just curious). Edit: packages != packets | The easiest way is to use the tool called cloc . Use it this way: cloc . That's it. :-) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/288428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/157212/"
]
} |
288,437 | I want to run my node app in background and curl simultaneously using && .I tried following but not work node app.js &;curl localhost and i tried another one node app.js & && curl localhost but both not working | You may want echo first && echo second && echo third & wait which gives you the output (similar to) [1] 4242firstsecondthird [1]+ Done The last & puts the whole previous command in a pipeline/job. That job consists of three commands chained together using the shell boolean expression. If one of those returns false, that should terminate the chain. But they will all run in the background. The problem with running the first program in the background but the second one in the foreground is that the second command does not know if the first completed successfully. When you put a program in the background, its status will be 0 unless the program could not be executed to begin with. So the following really does not make sense: ( start_webservice & ) && curl localhost Neither does this make sense: start_webservice & test "$?" = 0 && curl localhost Simply start the background service and unconditionally test it. More than likely, you will want to wait for a little while before making that test: start_webservice & success=0 tries=5 pause=1 while [ $success = 0 -a $tries -gt 0 ]; do sleep $pause let tries=tries-1 curl localhost && success=1 done if [ $success = 0 ]; then echo "Launch failed" fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172333/"
]
} |
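With a reasonably new curl (7.52.0 or later added --retry-connrefused), the polling loop in the answer can be collapsed into a single command; a sketch under that version assumption:

```bash
start_webservice &
# Retry roughly once a second, up to 5 times, while the port is still refusing connections.
curl --retry 5 --retry-delay 1 --retry-connrefused http://localhost/ || echo "Launch failed"
```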
288,512 | I have a bash script that needs to behave differently if a particular alias is defined. Is there a way to test if a particular command is an alias in bash? | If alias is passed an alias name without =value , it just prints the alias definition if that alias is defined, or fails with an error if there's no such alias. So you can just do: if alias your_alias_name >/dev/null 2>&1; then do_something; else do_another_thing; fi (replace your_alias_name as required) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/288512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8474/"
]
} |
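An equivalent test uses bash's type -t, which prints the single word alias when the name is a defined alias; a sketch with the same your_alias_name placeholder:

```bash
if [ "$(type -t your_alias_name)" = alias ]; then
    do_something
else
    do_another_thing
fi
```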
288,517 | I did with a password and with the following fieldsas root openssl req -x509 -days 365 -newkey rsa:2048 -keyout /etc/ssl/apache.key \ -out /etc/ssl/apache.crt Fields Country: FIState: PirkanmaaLocality: TampereOrganization: masiOrganizational Unit Name: SSL Certificate TestCommonName: 192.168.1.107/owncloudEmailAddress: [email protected] Output: SSL handshake error in HTTPS. Expected output: HTTPS connection. HTTP works. The CommonName should include the URL where you want to go, owncloud's thread here . I have tried unsuccessfully in commonname 192.168.1.107/owncloud 192.168.1.107/ Test OS for server: Debian 8.5. Server: Raspberry Pi 3b. Owncloud-server: 8.2.5. Owncloud-client: 2.1.1. Systems-client: Debian 8.5. | openssl req -x509 -days 365 -newkey rsa:2048 -keyout /etc/ssl/apache.key -out /etc/ssl/apache.crt You can't use this command to generate a well formed X.509 certificate. It will be malformed because the hostname is placed in the Common Name (CN) . Placing a hostname or IP Address in the CN is deprecated by both the IETF (most tools, like wget and curl ) and CA/B Forums (CA's and Browsers). According to both the IETF and CA/B Forums, Server names and IP Addresses always go in the Subject Alternate Name (SAN) . For the rules, see RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile and CA/Browser Forum Baseline Requirements . You mostly need to use an OpenSSL configuration file and tailor it to suit your needs. Below is an example of one I use. It's called example-com.conf , and it's passed to the OpenSSL command via -config example-com.conf . Also note well : all machines claim to be localhost , localhost.localdomain , etc. Be careful about issuing certificates for localhost . I'm not saying don't do it; just understand there are some risks involved. Alternatives to localhost are: (1) run DNS and issue certificates to the machine's DNS name. Or, (2) use static IP and include the static IP address. The Browsers will still give you warnings about a self signed certificate that does not chain back to a trusted root. Tools like curl and wget will not complain, but you still need to trust you self signed with an option like cURL's --cafile . To overcome the Browser trust issue, you have to become your own CA. "Becoming your own CA" is known as running a Private PKI. There's not much to it. You can do everything a Public CA can do. The only thing different is you will need to install your Root CA Certificate in the various stores. It's no different than, say, using cURL's cacerts.pm . cacerts.pm is just a collection of Root CA's, and now you have joined the club. If you become your own CA, then be sure to burn your Root CA private key to disc and keep it offline. Then pop it in your CD/DVD drive when you need to sign a signing request. Now you are issuing certificates just like a Public CA. None of this is terribly difficult once you sign one or two signing requests. I've been running a Private PKI for years at the house. All my devices and gadgets trust my CA. For more information on becoming your own CA, see How do you sign Certificate Signing Request with your Certification Authority and How to create a self-signed certificate with openssl? . From the comments in the configuration file below... 
Self Signed (note the addition of -x509) openssl req -config example-com.conf -new -x509 -sha256 -newkey rsa:2048 -nodes -keyout example-com.key.pem -days 365 -out example-com.cert.pem Signing Request (note the lack of -x509) openssl req -config example-com.conf -new -newkey rsa:2048 -nodes -keyout example-com.key.pem -days 365 -out example-com.req.pem Print a Self Signed openssl x509 -in example-com.cert.pem -text -noout Print a Signing Request openssl req -in example-com.req.pem -text -noout Configuration File # Self Signed (note the addition of -x509):# openssl req -config example-com.conf -new -x509 -sha256 -newkey rsa:2048 -nodes -keyout example-com.key.pem -days 365 -out example-com.cert.pem# Signing Request (note the lack of -x509):# openssl req -config example-com.conf -new -newkey rsa:2048 -nodes -keyout example-com.key.pem -days 365 -out example-com.req.pem# Print it:# openssl x509 -in example-com.cert.pem -text -noout# openssl req -in example-com.req.pem -text -noout[ req ]default_bits = 2048default_keyfile = server-key.pemdistinguished_name = subjectreq_extensions = req_extx509_extensions = x509_extstring_mask = utf8only# The Subject DN can be formed using X501 or RFC 4514 (see RFC 4519 for a description).# It's sort of a mashup. For example, RFC 4514 does not provide emailAddress.[ subject ]countryName = Country Name (2 letter code)countryName_default = USstateOrProvinceName = State or Province Name (full name)stateOrProvinceName_default = NYlocalityName = Locality Name (eg, city)localityName_default = New YorkorganizationName = Organization Name (eg, company)organizationName_default = Example, LLC# Use a friendly name here because it's presented to the user. The server's DNS# names are placed in Subject Alternate Names. Plus, DNS names here is deprecated# by both IETF and CA/Browser Forums. If you place a DNS name here, then you # must include the DNS name in the SAN too (otherwise, Chrome and others that# strictly follow the CA/Browser Baseline Requirements will fail).commonName = Common Name (e.g. server FQDN or YOUR name)commonName_default = Example CompanyemailAddress = Email AddressemailAddress_default = [email protected]# Section x509_ext is used when generating a self-signed certificate. I.e., openssl req -x509 ...[ x509_ext ]subjectKeyIdentifier = hashauthorityKeyIdentifier = keyid,issuer# If RSA Key Transport bothers you, then remove keyEncipherment. TLS 1.3 is removing RSA# Key Transport in favor of exchanges with Forward Secrecy, like DHE and ECDHE.basicConstraints = CA:FALSEkeyUsage = digitalSignature, keyEnciphermentsubjectAltName = @alternate_namesnsComment = "OpenSSL Generated Certificate"# RFC 5280, Section 4.2.1.12 makes EKU optional# CA/Browser Baseline Requirements, Appendix (B)(3)(G) makes me confused# extendedKeyUsage = serverAuth, clientAuth# Section req_ext is used when generating a certificate signing request. I.e., openssl req ...[ req_ext ]subjectKeyIdentifier = hashbasicConstraints = CA:FALSEkeyUsage = digitalSignature, keyEnciphermentsubjectAltName = @alternate_namesnsComment = "OpenSSL Generated Certificate"# RFC 5280, Section 4.2.1.12 makes EKU optional# CA/Browser Baseline Requirements, Appendix (B)(3)(G) makes me confused# extendedKeyUsage = serverAuth, clientAuth[ alternate_names ]DNS.1 = example.comDNS.2 = www.example.comDNS.3 = mail.example.comDNS.4 = ftp.example.com# Add these if you need them. But usually you don't want them or# need them in production. 
You may need them for development.# DNS.5 = localhost# DNS.6 = localhost.localdomain# DNS.7 = 127.0.0.1# IPv6 localhost# DNS.8 = ::1# DNS.9 = fe80::1 You may need to do the following for Chrome. Otherwise Chrome may complain a Common Name is invalid ( ERR_CERT_COMMON_NAME_INVALID ) . I'm not sure what the relationship is between an IP address in the SAN and a CN in this instance. # IPv4 localhost# IP.1 = 127.0.0.1# IPv6 localhost# IP.2 = ::1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
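If the OpenSSL on the server is 1.1.1 or newer, the SAN can also be supplied straight on the command line with -addext, avoiding a config file for simple self-signed cases. This is only a sketch under that version assumption, reusing the key/cert paths and IP from the question; owncloud.local is a hypothetical hostname:

```bash
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout /etc/ssl/apache.key -out /etc/ssl/apache.crt \
  -subj "/CN=owncloud" \
  -addext "subjectAltName=IP:192.168.1.107,DNS:owncloud.local"
```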
288,521 | If I use cat -n text.txt to automatically number the lines, how do I then use the command to show only certain numbered lines. | Use sed Usage $ cat fileLine 1Line 2Line 3Line 4Line 5Line 6Line 7Line 8Line 9Line 10 To print one line (5) $ sed -n 5p fileLine 5 To print multiple lines (5 & 8) $ sed -n -e 5p -e 8p fileLine 5Line 8 To print specific range (5 - 8) $ sed -n 5,8p fileLine 5Line 6Line 7Line 8 To print range with other specific line (5 - 8 & 10) $ sed -n -e 5,8p -e 10p fileLine 5Line 6Line 7Line 8Line 10 | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/288521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174168/"
]
} |
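The same line selections can be written with awk, which reads naturally for ranges; equivalent sketches:

```bash
awk 'NR == 5' file               # line 5 only
awk 'NR == 5 || NR == 8' file    # lines 5 and 8
awk 'NR >= 5 && NR <= 8' file    # lines 5 through 8
```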
288,551 | I just ran a job that takes several hours, and I forgot to pipe that text into a text file. pseudocode: echo [previous text output] > OutputHistory.txt Additionally, I can't just "copy and paste" what was in my terminal because 1) the display omits important formatting characters like "\t", and 2, I may have closed the terminal window. Is this possible with any Unix commands? | This is impossible in general. Once an application has emitted some output, the only place where this output is stored is in the memory of the terminal. To give an extreme example, if this is the 1970s the terminal is a hardcopy printer, the output isn't getting back into the computer without somebody typing it in. If the output is still in the scrolling buffer of your terminal emulator, you may be able to get it back, but how to do so depends on the terminal, there's no standard way. Tabs may or may not have been converted into spaces at that point. Whether formatting information (colors, bold, etc.) can be retrieved and in what format depends on the terminal. With most terminals, there's no easy or foolproof way to find where the command's output started and ended. If you plan in advance, you can record the command's output transparently with script . Running script mycommand.log mycommand may be different from mycommand 2>&1 | tee mycommand.log because with script , the command is still writing to a terminal. A compromise is to always run long-lived commands inside screen or tmux . Both have a way to dump the scrollback buffer to a file, and they have the added bonus that you can disconnect from the session without disrupting the program's execution and reconnect afterwards. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167129/"
]
} |
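To make the "plan in advance" options concrete, here is a minimal sketch of both; my_long_command is a placeholder, and the script invocation shown is the util-linux form:

```bash
# Record everything the command prints to the terminal into a typescript file.
script -c 'my_long_command' mycommand.log

# Or duplicate stdout and stderr into a file while still watching it live.
my_long_command 2>&1 | tee mycommand.log
```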
288,589 | Since google chrome/chromium spawn multiple processes it's harder to see how much total memory these processes use in total. Is there an easy way to see how much total memory a series of connected processes is using? | Given that google killed chrome://memory in March 2016, I am now using smem : # detailed output, in kB apparentlysmem -t -P chrom# just the total PSS, with automatic unit:smem -t -k -c pss -P chrom | tail -n 1 to be more accurate replace chrom by full path e.g. /opt/google/chrome or /usr/lib64/chromium-browser this works the same for multiprocess firefox (e10s) with -P firefox be careful, smem reports itself in the output, an additional ~10-20M on my system. unlike top it needs root access to accurately monitor root processes -- use sudo smem for that. see this SO answer for more details on why smem is a good tool and how to read the output. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/288589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123297/"
]
} |
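Without smem installed, a crude total can be summed from ps; note that RSS counts shared pages once per process, so this overstates the real footprint, which is exactly why the answer prefers PSS. A sketch, assuming the processes are named chrome:

```bash
ps -o rss= -C chrome | awk '{ sum += $1 } END { printf "%.1f MiB\n", sum / 1024 }'
```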
288,643 | I know that ${var} and $var does the same thing. I also know that you can use brace for advanced options like ${var:=word} or ${var#pattern} . It's also explained here . But I see in some bash scripts the use of braces in the simplest form : ${var} and sometimes I see only this notation $var . Is there any reason to prefer this syntax over the simple $var as it does the same thing ? Is it for portability ? Is it a coding style ? As a conscientious developer, which syntax I should use ? | It is for the sake of clarity. As pointed out by arzyfex, there is a difference between ${foo}_bar and $foo_bar . Always use braces, and you will never make that mistake. Also note that you need the braces if you wish to refer to positional parameters with more than one digit, e.g. ${11} . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288643",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126736/"
]
} |
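A two-line illustration of the ambiguity the answer points out (the variable names are arbitrary):

```bash
foo=abc
echo "${foo}_bar"   # prints: abc_bar
echo "$foo_bar"     # prints an empty line: this expands the unset variable foo_bar
```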
288,724 | In https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/#platform-linux ssh-keygen -t rsa -b 4096 -C "[email protected]"# Creates a new ssh key, using the provided email as a label Generating public/private rsa key pair. Is [email protected] an argument to the option -C ? What does "label" mean? Can it be any string, not necessarily my email account registered with github? Thanks. | Yes, [email protected] is the argument for -C , which allows you to specify the comment attached to the generated key. The comment is simply text appended to the key in your public key file, and is typically used as a label for your key ( e.g. on GitHub which is what you seem interested in). The default comment is your username @ the hostname of the system you generate your key on, but it can be any string you wish. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288724",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
288,731 | I have a Chromebook on which I installed Arch Linux. However, this Chromebook comes with a very odd key: a "Power On/Off" key at the top right of the keyboard. Without ANY warning, this button turns off the computer. Naturally, I have been pressing this key when looking for backspace or when my finger slipped while pressing surrounding buttons. As a consequence, I have turned off my computer at very impractical moments. This has to stop. How can I disable or remap this key? | I found your solution on the Arch wiki : Out of the box, systemd-logind will catch power key and lid switch events and handle them: it will shut down the Chromebook on a power key press, and a suspend on a lid close. However, this policy might be a bit harsh given that the power key is an ordinary key at the top right of the keyboard that might be pressed accidentally. To configure logind to ignore power key presses and lid switches, add the lines to logind.conf below. /etc/systemd/logind.conf HandlePowerKey=ignoreHandleLidSwitch=ignore Then restart logind for the changes to take effect. It looks like you just need to add HandlePowerKey=ignore to /etc/systemd/logind.conf . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/288731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
288,751 | I want to install my own service by dropping a .service file in /etc/systemd/system . My source .service file is in /opt/something.service . I have two choices when installing: cp /opt/something.service /etc/systemd/system ln -s /opt/something.service /etc/systemd/system Both approaches work when I start and enable the service (the service works correctly). There is however a difference when disabling the service: in the case of the copy, /etc/systemd/system/something.service remains in the case of the link, /etc/systemd/system/something.service is removed Is this by design? This is quite annoying because after disabling the service created via a link, it is not enough to enable it - the service unit must be recreated too. | Yes, this is by design. The man page for systemctl disable says: Disables one or more units. This removes all symlinks to the specified unit files from the unit configuration directory, and hence undoes the changes made by enable. Note however that this removes all symlinks to the unit files (i.e. including manual additions), not just those actually created by enable. Here's the link for it: https://www.freedesktop.org/software/systemd/man/systemctl.html# It does not explain why but I can hazard a guess that it cannot differentiate between links created using systemctl enable and the ones created manually since it is looking for links that point to the unit file. You should use the link option in systemctl when you create a symlink to a source file outside the systemd search path.Also from the same man page. link FILENAME... Link a unit file that is not in the unit file search paths into the unit file search path. This requires an absolute path to a unit file. The effect of this can be undone with disable. The effect of this command is that a unit file is available for start and other commands although it is not installed directly in the unit search path. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64753/"
]
} |
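A short sketch of the link workflow described at the end of the answer, using the unit path from the question; recent systemd versions also accept the absolute path directly with enable, combining both steps:

```bash
sudo systemctl link /opt/something.service
sudo systemctl start something.service

# On newer systemd, linking and enabling in one step:
sudo systemctl enable /opt/something.service
```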
288,754 | I would like to know which files in my system I access most often, as a gauge of how important they are. I know the OS records the last time the file was accessed. Is there someone way to log in a text file each time the file is accessed, possibly on an hourly or daily timescale? I suspect I could use a chron task but I am not very familiar with using it. I would prefer an OS X solution. | Yes, this is by design. The man page for systemctl disable says: Disables one or more units. This removes all symlinks to the specified unit files from the unit configuration directory, and hence undoes the changes made by enable. Note however that this removes all symlinks to the unit files (i.e. including manual additions), not just those actually created by enable. Here's the link for it: https://www.freedesktop.org/software/systemd/man/systemctl.html# It does not explain why but I can hazard a guess that it cannot differentiate between links created using systemctl enable and the ones created manually since it is looking for links that point to the unit file. You should use the link option in systemctl when you create a symlink to a source file outside the systemd search path.Also from the same man page. link FILENAME... Link a unit file that is not in the unit file search paths into the unit file search path. This requires an absolute path to a unit file. The effect of this can be undone with disable. The effect of this command is that a unit file is available for start and other commands although it is not installed directly in the unit search path. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288754",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174308/"
]
} |
288,774 | Why do some GNU Coreutils commands have the -T/--no-target-directory option? It seems like everything that it does can be achieved using the semantics of the . (self dot) in a traditional Unix directory hierarchy. Considering: cp -rT /this/source dir The -T option prevents the copy from creating a dir/source subdirectory. Rather /this/source is identified with dir and the contents are mapped between the trees accordingly. So for instance /this/source/foo.c goes to dir/foo.c and so on, rather than to dir/source/foo.c . But this can be easily accomplished without the -T option using: cp -r /this/source/. dir # Probably worked fine since dawn of Unix? Semantically, the trailing dot component is copied as a child of dir , but of course that "child" already exists (so doesn't have to be created) and is actually dir itself, so the effect is that /this/path is identified with dir . It works fine if the current directory is the target: cp -r /this/tree/node/. . # node's children go to current dir Is there something you can do only with -T that can rationalize its existence? (Besides support for operating systems that don't implement the dot directory, a rationale not mentioned in the documentation.) Does the above dot trick not solve the same race conditions that are mentioned in the GNU Info documentation about -T ? | Your . trick can only be used when you're copying a directory, not a file. The -T option works with both directories and files. If you do: cp srcfile destfile and there's already a directory named destfile it will copy to destfile/srcfile , which may not be intended. So you use cp -T srcfile destfile and you correctly get the error: cp: cannot overwrite directory `destfile' with non-directory If you tried using the . method, the copy would never work: cp: cannot stat `srcfile/.`: Not a directory | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/288774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16369/"
]
} |
288,782 | I need to encrypt a large file using gpg . Is it possible to show a progress bar like when using the pv command? | progress can do this for you — not quite a progress bar, but it will show progress (as a percentage) and the current file being processed (when multiple files are processed): gpg ... &progress -mp $! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/288782",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42478/"
]
} |
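Since the question mentions pv: when the plaintext is a regular file, pv itself can feed gpg and will show a proper bar with ETA because it knows the file size. A sketch with symmetric encryption; substitute your usual recipient or other options:

```bash
pv big.file | gpg --symmetric --output big.file.gpg
```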
288,783 | If I had a four gigabyte disk image, and I copied the first, say, two gigs, into a file at one path, and the remaining two gigs into a file at a different path, could I mount it as one disk image on one mount point despite half being in one file and the other half being in another file? Or if that's not possible if I had one disk image with one, say, ext4 partition and another with the same partition table and ext4 partition, would I be able to mount them on the same mount point? Methods that require FUSE will work for me. | progress can do this for you — not quite a progress bar, but it will show progress (as a percentage) and the current file being processed (when multiple files are processed): gpg ... &progress -mp $! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/288783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153468/"
]
} |
288,808 | When I run $ update-alternatives --config java I get a few rows: What is the difference between auto mode and manual mode? | In a nutshell, update-alternatives : in Auto Mode, will select the generic name of the program automatically based on the Priority value of the alternatives; The one with the highest priority gets set as the generic name. in Manual Mode, will set the generic name as the user selected alternative irrespective of the Priority value of the alternatives, hence the name "manual". Check this: % sudo update-alternatives --config editorThere are 5 choices for the alternative editor (providing /usr/bin/editor). Selection Path Priority Status------------------------------------------------------------ 0 /bin/nano 40 auto mode 1 /bin/ed -100 manual mode 2 /bin/nano 40 manual mode* 3 /usr/bin/emacs24 0 manual mode 4 /usr/bin/vim.basic 30 manual mode 5 /usr/bin/vim.tiny 10 manual mode Note that, /bin/nano is both available in auto and manual mode. If the link group were set in auto mode then the alternative with the highest priority i.e. /bin/nano (priority 40) would be selected as the generic name i.e. /usr/bin/editor . This is the default until the user introduces any change to the link group. On the other hand, in the manual mode, you can select any alternative as the generic name e.g. in the example, i have /usr/bin/emacs24 set as the generic /usr/bin/editor . You can select any one you like by using the Selection number on the left of the option. Now I can revert back from the manual mode to auto mode by selecting 0 from the above or by: sudo update-alternatives --auto editor | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/288808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174362/"
]
} |
288,871 | When I SSH in to our server, control + arrow sequences are working fine for me: Ctrl + V then Ctrl + up , down , right , left gives ^[OA , ^[OB , ^[OC , and ^[OD respectively. In tmux, I get ^[[A , ^[[B , ^[[C , and ^[[D . I'm connecting to Ubuntu via PuTTY, but … I have the correct terminal set ( putty because ncurses-term is installed). All other key combinations are working in the main shell. I'm using the right character set (UTF-8). I'm definitely getting a 256-colour terminal (I've tried multiple invocations of TERM=... tmux ) I've read the tmux FAQs that say to use this config: set -g terminal-overrides "xterm*:kLFT5=\eOD:kRIT5=\eOC:kUP5=\eOA:kDN5=\eOB:smkx@:rmkx@" making my config like so: set -g default-terminal "screen-256color" set -g terminal-overrides "screen*:kLFT5=\eOD:kRIT5=\eOC:kUP5=\eOA:kDN5=\eOB:smkx@:rmkx@" set-window-option -g xterm-keys on since screen was recommended elsewhere (though I tried putty there, too. The only thing I've found that worked is running tput rmkx within tmux , but I don't know if that's the correct solution, what other effects it has, if other programs will change this setting, or even how it should be set correctly in .tmux.conf so that I don't have to type it in manually all the time. | This is similar to How to enable Control key combinations for GNU screen on putty? , but addresses a different aspect. In a quick check, it seems that the problem is a conflict between this line set-window-option -g xterm-keys on and this: set -g terminal-overrides "screen*:kLFT5=\eOD:kRIT5=\eOC:kUP5=\eOA:kDN5=\eOB:smkx@:rmkx@" Dropping the set-window-option makes your configuration work for me. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/161288/"
]
} |
288,886 | Im trying double loop using array values like array names for loop array1="name1 name2"name1="one two"name2="red blue"for name in $array1do for value in $name do echo "$name - $value" donedone I need to use 'name' to '$name' for use in 2nd loop, but this don't work for me. How could I use value of array1 like the name of array inside 2nd loop? | That's not how you define arrays in bash . a="foo bar" defines a string/scalar variable. And using it as $a (unquoted) performs the split+glob operator which only makes sense for strings representing a $IFS separated list of file patterns. In bash , arrays are defined as: a=(foo bar) So here, you'd want: array1=(name1 name2)name1=(one two)name2=(red blue)for name in "${array1[@]}"do typeset -n nameref="$name" for value in "${nameref[@]}" do printf '%s\n' "$name - $value" donedone typeset -n is a relatively recent addition to bash and declares a nameref , that is a variable that contains the name of another variable and when expanded actually refers to the named variable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174403/"
]
} |
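typeset -n requires bash 4.3 or newer; on older bash the same nested loop can be expressed with indirect expansion instead. A sketch reusing the arrays defined in the answer:

```bash
for name in "${array1[@]}"; do
    ref="${name}[@]"
    for value in "${!ref}"; do
        printf '%s\n' "$name - $value"
    done
done
```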
288,934 | I create a file with tab-delimited fields. echo foo$'\t'bar$'\t'baz$'\n'foo$'\t'bar$'\t'baz > input I have the following script named zsh.sh #!/usr/bin/env zshwhile read line; do <<<$line cut -f 2done < "$1" I test it. $ ./zsh.sh inputbarbar This works fine. However, when I change the first line to invoke bash instead, it fails. $ ./bash.sh inputfoo bar bazfoo bar baz Why does this fail with bash and work with zsh ? Additional troubleshooting Using direct paths in the shebang instead of env produces the same behaviour. Piping with echo instead of using the here-string <<<$line also produces the same behaviour. i.e. echo $line | cut -f 2 . Using awk instead of cut works for both shells. i.e. <<<$line awk '{print $2}' . | What happens is that bash replaces the tabs with spaces. You can avoid this problem by saying "$line" instead, or by explicitly cutting on spaces. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/288934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
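A minimal sketch of the quoting fix suggested in the answer above, applied to the script from the question:

    #!/usr/bin/env bash
    # bash.sh -- print the second tab-separated field of every line of "$1"
    while IFS= read -r line; do
        # Quoting "$line" keeps the tabs intact; unquoted, the expansion is
        # word-split and rejoined with single spaces, so cut no longer sees
        # its default tab delimiter
        cut -f 2 <<<"$line"
    done < "$1"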
288,962 | When I used shell in a box and when I call less command ( echo foo | less ) in ajax response there was this code: \u001B[?1049h\u001B[?1h\u001B=\rfoo\r\n\u001B[7m(END)\u001B[27m\u001B[K what does \u001B[?1049h and \u001B[?1h escape sequences do, also what is \u001B= ? Are they documented somewhere? | \u001B is an unnecessarily verbose ASCII escape character, which seems to have been introduced for ECMAScript6 . POSIX would use octal \033 , and some others allow hexadecimal \01b . The upper/lower case of the number is irrelevant. The \u001B[?1049h (and \u001B[?1049l ) are escape sequences which tell xterm to optionally switch to and from the alternate screen. The question mark shows that it is "private use" (a category set aside for implementation-specific features in the standard). About a third of the private-use modes listed in XTerm Control Sequences correspond to one of DEC's (those have a mnemonic such as DECCKM in their descriptions). The others are either original to xterm, or adapted from other terminals, as noted. The reason for this escape sequence is to provide a terminfo-based way to let users decide whether programs can use the alternate screen. According to the xterm manual : titeInhibit (class TiteInhibit ) Specifies whether or not xterm should remove ti and te termcap entries (used to switch between alternate screens on startup of many screen-oriented programs) from the TERMCAP string. If set, xterm also ignores the escape sequence to switch to the alternate screen. Xterm supports terminfo in a different way, supporting composite control sequences (also known as private modes) 1047 , 1048 and 1049 which have the same effect as the original 47 control sequence. The default for this resource is "false". The 1049 code (introduced in 1998 ) is recognized by most terminal emulators which claim to be xterm-compatible, but most do not make the feature optional . So they don't really implement the feature. On the other hand, \u001B[?1h did not originate with xterm, but (like \u001B= ) is from DEC VT100s, used for switching the terminal to use application mode for cursor keys (DECCKM) and the numeric keypad (DECKPAM). These are used by programs such as less when initializing the terminal because terminal descriptions use application (or normal) mode escape sequences for special keys to match the initialization strings given in these terminal descriptions. Further reading: Why doesn't the screen clear when running vi? (xterm FAQ) Why can't I use the cursor keys in (whatever) shell? (xterm FAQ) My cursor keys do not work (ncurses FAQ) XTerm Control Sequences | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/288962",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1806/"
]
} |
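If you want to see what those sequences actually do, here is a small hedged experiment you can run in an xterm-compatible terminal:

    # Private mode 1049: switch to the alternate screen and back
    printf '\033[?1049h'     # enter the alternate screen
    printf 'hello from the alternate screen\n'
    sleep 2
    printf '\033[?1049l'     # return to the normal screen, restoring its contents
    # DECCKM / DECKPAM: application cursor-key and keypad modes, as set by "less"
    printf '\033[?1h\033='   # enable; printf '\033[?1l\033>' restores normal mode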
289,024 | I want to use the following command to do a remote backup of /home : duplicity full /home sftp://[email protected]/home/user/backup When the command is run I get the following output: Local and Remote metadata are synchronized, no sync needed.Last full backup date: noneGnuPG passphrase: Retype passphrase to confirm: --------------[ Backup Statistics ]--------------StartTime 1465578990.15 (Fri Jun 10 19:16:30 2016)EndTime 1465578990.22 (Fri Jun 10 19:16:30 2016)ElapsedTime 0.07 (0.07 seconds)SourceFiles 75SourceFileSize 118644 (116 KB)NewFiles 75NewFileSize 118644 (116 KB)DeletedFiles 0ChangedFiles 0ChangedFileSize 0 (0 bytes)ChangedDeltaSize 0 (0 bytes)DeltaEntries 75RawDeltaSize 110452 (108 KB)TotalDestinationSizeChange 35295 (34.5 KB)Errors 0------------------------------------------------- But no files are stored on the remote host. If I change the destination sftp://[email protected]/home/user/backups in the command above to for example file:///home/user/backup the backup files are stored locally as expected, and I get the same terminal output as above. What puzzles me more is that if I change the destination to some url that is definitely not writable on the remote host, I still get the message above saying Errors 0 , but of course no files are transferred to the remote host. What am I doing wrong? Why can I do a local backup but not a remote one, and why are there no error message when the files are not transferred to the remote host? Additional info: Tried to run the command with the --verbosity 9 switch and a directory that doesn't exist on the remote host set as the destination dir: [...]AsyncScheduler: running task synchronously (asynchronicity disabled)ssh: [chan 1] open('/var/httpd.www/home/notExistingDir/duplicity-full.20160610T173142Z.vol1.difftar.gpg', 'wb')ssh: [chan 1] open('/var/httpd.www/home/notExistingDir/duplicity-full.20160610T173142Z.vol1.difftar.gpg', 'wb') -> 00000000ssh: [chan 1] close(00000000)ssh: [chan 1] stat('/var/httpd.www/home/notExistingDir/duplicity-full.20160610T173142Z.vol1.difftar.gpg')Deleting /tmp/duplicity-gYlv_8-tempdir/mktemp-MOjDuP-2Forgetting temporary file /tmp/duplicity-gYlv_8-tempdir/mktemp-MOjDuP-2AsyncScheduler: task completed successfullyProcessed volume 1[...] | looks like you backed up to ~user/home/user/backup on the target machine. try (notice the extra slash signalling an absolute path) duplicity full /home sftp://[email protected]//home/user/backup or alternatively duplicity full /home sftp://[email protected]/backup . ..ede/duply.net | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289024",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36186/"
]
} |
289,099 | I duplicated a hard disk to a new larger one using the method suggested in Full DD copy from hdd to hdd . After doing that df -h reports the original and smaller partition sizes from the original disk and gparted highlights the disparity and offers to fix them, though it seems unwise as they are mounted. If you look closely at the image you can see that Used + Unused < Size for the partitions with the yellow warning signs. What command line tools can be used to fix the issue, and will it be safe for gparted to do it on mounted partition live? Ideally I should have done that before switching over to the target disk and rebooting from it. Below is the information dialog from gparted about the discrepancy and I edited the title to describe it better. | If gparted only has to extend the partition or filesystem into unused space (immediately following the partition), then it should be safe to let it extend the partition and/or fs. If, however, it has to MOVE any partitions around to make space for resizing, you'll have to boot with a gparted Live CD See the man page for resize2fs (which is the command-line tool gparted will use to grow an ext2, ext3, and ext4 filesystem) for more details about resizing those filesystems. For ext2/3/4, growing a filesystem is generally not a problem and can safely be done while the fs is mounted. Shrinking a filesystem, however, is more troublesome and should be done while the fs is unmounted. If it's the rootfs, that means booting to a rescue CD/USB/PXE etc. BTW, both dd and cat are amongst the worst ways to copy a linux system to another hard disk. Use Clonezilla , that's what it's for. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289099",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
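For the command-line route mentioned above, a hedged sketch of growing the last partition and its ext filesystem into the free space immediately after it (device and partition number are hypothetical; check with lsblk and take a backup first):

    # Grow partition 2 of /dev/sdb so it ends at 100% of the disk
    sudo parted /dev/sdb resizepart 2 100%
    # Then grow the ext2/3/4 filesystem to fill the enlarged partition
    sudo resize2fs /dev/sdb2

Depending on the parted version it may ask for confirmation when the partition is in use; growing with resize2fs is safe on a mounted filesystem, unlike shrinking.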
289,127 | My small home server runs on a distribution featuring ZFS.On that system, I implemented a rolling snapshot scheme: every hour, a snapshot is created once a day, the chain is thinned so that I have a set of hourly / daily / weekly / monthly snapshots I would like to store an offsite backup of some of the file systems on a USB drive in my office. The plan is to update the drive every other week.However, due to the rolling snapshot scheme, I have troubles implementing incremental snapshots. To given you an illustration, this is my desired procedure: Initial snapshot: zfs snap tank/fs@snap0 Transfer initial snapshot: zfs send tank/fs@snap0 | zfs recv -Fduv backup_tank Store backup_tank offsite Make a few snapshots: zfs snap tank/fs@snap1 , zfs snap tank/fs@snap2 Thin the chain: zfs destroy tank/fs@snap0 Return backup_tank and make an incremental update of the filesystem Obviously, zfs send -I snap0 tank/fs@snap2 | zfs recv -Fduv backup_tank fails as snap0 does not exist on tank anymore. Long story cut short: Is there a clever solution for combining thinning of snapshot chains and incremental send / recv ? Every time I attach the drive and run some commands I would like to a have a copy of the file system at that point of time. In this example, backup_tank should contain the snapshots fs@snap1 and fs@snap2 . | You can't do exactly what you want. Whenever you create a zfs send stream, that stream is created as the delta between two snapshots. (That's the only way to do it as ZFS is currently implemented.) In order to apply that stream to a different dataset, the target dataset must contain the starting snapshot of the stream; if it doesn't, there is no common point of reference for the two. When you destroy the @snap0 snapshot on the source dataset, you create a situation that is impossible for ZFS to reconcile. The way to do what you are asking is to keep one snapshot in common between both datasets at all times, and use that common snapshot as the starting point for the next send stream. So, you might in step 1 create a snapshot @backup0, and then some time around step 6 create and use a snapshot @backup1 to use for updating the off-site backup. You then transfer the stream that is the delta between @backup0 and @backup1 (which will include all intermediate snapshots), then delete @backup0 but keep @backup1 (which becomes the new common denominator). Next time you refresh the backup, you might create @backup2 (instead of @backup1) and transfer the delta between @backup1 and @backup2 (instead of @backup0 and @backup1) followed by deleting @backup1 (instead of @backup0), and so on. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142988/"
]
} |
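A hedged sketch of that rotation, using the dataset names from the question and made-up @backupN snapshot names that are kept outside the hourly/daily thinning scheme:

    # Initial full copy, anchored on a dedicated snapshot
    zfs snapshot tank/fs@backup0
    zfs send tank/fs@backup0 | zfs recv -Fduv backup_tank

    # Next time the USB pool is attached: create the new anchor and send the delta
    zfs snapshot tank/fs@backup1
    zfs send -I tank/fs@backup0 tank/fs@backup1 | zfs recv -Fduv backup_tank

    # Only after the send succeeds, retire the old anchor on both pools
    zfs destroy tank/fs@backup0
    zfs destroy backup_tank/fs@backup0

The capital -I also transfers the intermediate snapshots, so backup_tank ends up holding fs@snap1, fs@snap2 and so on as they existed at that point in time.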
289,178 | I know about Linux terminal. I can also issue many advanced commands over terminal. But one simple concept is not clear to me. What is terminal and how does it work? I know about hardware which consists of CPU, RAM, HARD DISK and so on. I know about kernel which is basically the core of the operating system. I know about software which sits on the top of kernel. And I know about users. And I know that user uses either terminal or GUI to give instructions to the software.(or kernel?) Now please explain these concepts of terminal and shell. Graphical explanation and simple non-technical words are preferable. | What is shell? In simple words, shell is a software which takes the command from your keyboard and passes it to the OS. So are konsole, xterm or gnome-terminals shells? No, they're called terminal emulators. They open a GUI to interact with the shell. You can think of them as a frontend to the shells. Different Shells There are different shells which are more or less same but the features and syntaxes are different. Bourne shell The most basic shell available on all UNIX systems Korn Shell Based on the Bourne shell with enhancements C Shell Similar to the C programming language in syntax Bash Shell Bourne Again Shell combines the advantages of the Korn Shell and the C Shell. The default on most Linux distributions. tcsh Similar to the C Shell | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174603/"
]
} |
289,211 | Multiple sessions of the same user. When one of them gets to the point that it can no longer run new programs, none of them can, not even a new login of that user. Other users can still run new programs just fine, including new logins. Normally user limits are in limits.conf, but its documentation says "please note that all limit settings are set per login. They are not global, nor are they permanent; existing only for the duration of the session." I'm nowhere close to running out of ram (44GB available), but I can't figure out what else to look at. What limits exist that would have a global effect on all sessions using the same UID, but not other UIDs? Edited on 6/12/16 at 8:45p to add: While writing the below I realized that the problem could be X11 related. This user account on this box is used nearly exclusively for GUI applications. Is there a good text based program I can try to run from bash that will use lots of resources and give good error messages? The box does not get to the point where it cannot even run ls. Unfortunately, the GUI programs this problem normally affects (Chrome and Firefox) do not do a good job of leaving error messages behind. Chrome tabs will start showing up blank or with the completely useless "Aw, Snap!" error. Firefox simply will refuse to start. The only even partially helpful error messages I managed to obtain came from trying to start Firefox from bash: [pascal@firefox ~]$ firefox --display=:0 --safe-modeAssertion failure: ((bool)(__builtin_expect(!!(!NS_FAILED_impl(rv)), 1))) && thread (Should successfully create image decoding threads), at /builddir/build/BUILD/firefox-45.2.0/firefox-45.2.0esr/image/DecodePool.cpp:359#01: ???[/usr/lib64/firefox/libxul.so +0x10f2165]#02: ???[/usr/lib64/firefox/libxul.so +0xa2dd2c]#03: ???[/usr/lib64/firefox/libxul.so +0xa2ee29]#04: ???[/usr/lib64/firefox/libxul.so +0xa2f4c1]#05: ???[/usr/lib64/firefox/libxul.so +0xa3095d]#06: ???[/usr/lib64/firefox/libxul.so +0xa52d44]#07: ???[/usr/lib64/firefox/libxul.so +0xa4c051]#08: ???[/usr/lib64/firefox/libxul.so +0x1096257]#09: ???[/usr/lib64/firefox/libxul.so +0x1096342]#10: ???[/usr/lib64/firefox/libxul.so +0x1dba68f]#11: ???[/usr/lib64/firefox/libxul.so +0x1dba805]#12: ???[/usr/lib64/firefox/libxul.so +0x1dba8b9]#13: ???[/usr/lib64/firefox/libxul.so +0x1e3e6be]#14: ???[/usr/lib64/firefox/libxul.so +0x1e48d1f]#15: ???[/usr/lib64/firefox/libxul.so +0x1e48ddd]#16: ???[/usr/lib64/firefox/libxul.so +0x20bf7bc]#17: ???[/usr/lib64/firefox/libxul.so +0x20bfae6]#18: ???[/usr/lib64/firefox/libxul.so +0x20bfe5b]#19: ???[/usr/lib64/firefox/libxul.so +0x21087cd]#20: ???[/usr/lib64/firefox/libxul.so +0x2108cd2]#21: ???[/usr/lib64/firefox/libxul.so +0x210aef4]#22: ???[/usr/lib64/firefox/libxul.so +0x22578b1]#23: ???[/usr/lib64/firefox/libxul.so +0x228ba43]#24: ???[/usr/lib64/firefox/libxul.so +0x228be1d]#25: XRE_main[/usr/lib64/firefox/libxul.so +0x228c073]#26: ???[/usr/lib64/firefox/firefox +0x4c1d]#27: ???[/usr/lib64/firefox/firefox +0x436d]#28: __libc_start_main[/lib64/libc.so.6 +0x21b15]#29: ???[/usr/lib64/firefox/firefox +0x449d]#30: ??? (???:???)Segmentation fault[pascal@firefox ~]$ firefox --display=:0 --safe-mode -g1465632860286DeferredSave.extensions.jsonWARNWrite failed: Error: Could not create new thread! 
(resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:91465632860287addons.xpi-utilsWARNFailed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:91465632860288addons.xpi-utilsWARNFailed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:91465632860289addons.xpi-utilsWARNFailed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:91465632860289addons.xpi-utilsWARNFailed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:91465632860290addons.xpi-utilsWARNFailed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:91465632860358DeferredSave.addons.jsonWARNWrite failed: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:91465632860359addons.repositoryERRORSaveDBToDisk failed: Error: Could not create new thread! 
(resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9Segmentation fault[pascal@firefox ~]$ [pascal@localhost ~]$ ulimit -aHcore file size (blocks, -c) unlimiteddata seg size (kbytes, -d) unlimitedscheduling priority (-e) 0file size (blocks, -f) unlimitedpending signals (-i) 579483max locked memory (kbytes, -l) 64max memory size (kbytes, -m) unlimitedopen files (-n) 65536pipe size (512 bytes, -p) 8POSIX message queues (bytes, -q) 819200real-time priority (-r) 0stack size (kbytes, -s) unlimitedcpu time (seconds, -t) unlimitedmax user processes (-u) 579483virtual memory (kbytes, -v) unlimitedfile locks (-x) unlimited[pascal@localhost ~]$ ulimit -acore file size (blocks, -c) 0data seg size (kbytes, -d) unlimitedscheduling priority (-e) 0file size (blocks, -f) unlimitedpending signals (-i) 579483max locked memory (kbytes, -l) 64max memory size (kbytes, -m) unlimitedopen files (-n) 32768pipe size (512 bytes, -p) 8POSIX message queues (bytes, -q) 819200real-time priority (-r) 0stack size (kbytes, -s) 8192cpu time (seconds, -t) unlimitedmax user processes (-u) 4096virtual memory (kbytes, -v) unlimitedfile locks (-x) unlimited[pascal@localhost ~]$ set /proc/*/task/*/cwd/.; echo $#306[pascal@localhost ~]$ prlimitRESOURCE DESCRIPTION SOFT HARD UNITSAS address space limit unlimited unlimited bytesCORE max core file size 0 unlimited blocksCPU CPU time unlimited unlimited secondsDATA max data size unlimited unlimited bytesFSIZE max file size unlimited unlimited blocksLOCKS max number of file locks held unlimited unlimitedMEMLOCK max locked-in-memory address space 65536 65536 bytesMSGQUEUE max bytes in POSIX mqueues 819200 819200 bytesNICE max nice prio allowed to raise 0 0NOFILE max number of open files 32768 65536NPROC max number of processes 4096 579483RSS max resident set size unlimited unlimited pagesRTPRIO max real-time priority 0 0RTTIME timeout for real-time tasks unlimited unlimited microsecsSIGPENDING max number of pending signals 579483 579483STACK max stack size 8388608 unlimited bytes Edited on 6/13/16 at 10:24p to add: Not a GUI problem. When I tried to su to the user today, that doesn't even work. Root is fine. I can ls, vi, create a new user, su to that user, everything works fine for that user, I exit and try to su to the problem user and no go. Bash kinda loaded the first time, but even exit didn't work. I had to reconnect to get back to root. 
[root@firefox ~]# su - pascalLast login: Sat Jun 11 03:08:47 CDT 2016 on pts/1-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: Resource temporarily unavailable-bash-4.2$ ls-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: Resource temporarily unavailable-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: Resource temporarily unavailable-bash-4.2$ exitlogout-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: retry: No child processes-bash: fork: Resource temporarily unavailable-bash-4.2$ [root@firefox ~]# ls -l /total 126lrwxrwxrwx. 1 root root 7 Jan 28 23:53 bin -> usr/bin---- snip ----drwxr-xr-x. 19 root root 23 May 27 18:03 var[root@firefox ~]# vi /etc/rc.local[root@firefox ~]# useradd test[root@firefox ~]# su - test[test@firefox ~]$ cd[test@firefox ~]$ ls -ltotal 0[test@firefox ~]$ ls -l /total 126lrwxrwxrwx. 1 root root 7 Jan 28 23:53 bin -> usr/bin---- snip ----drwxr-xr-x. 19 root root 23 May 27 18:03 var[test@firefox ~]$ vi /etc/rc.local[test@firefox ~]$ exitlogout[root@firefox ~]# su - pascalLast login: Mon Jun 13 22:12:12 CDT 2016 on pts/1su: failed to execute /bin/bash: Resource temporarily unavailable[root@firefox ~]# | nproc was the problem: [root@localhost ~]# ps -eLf | grep pascal | wc -l4068[root@localhost ~]# cat /etc/security/limits.d/20-nproc.conf# Default limit for number of user's processes to prevent# accidental fork bombs.# See rhbz #432903 for reasoning.* soft nproc 4096root soft nproc unlimited[root@localhost ~]# man limits.conf states: Also, please note that all limit settings are set per login. They are not global, nor are they permanent; existing only for the duration of the session. One exception is the maxlogin option, this one is system wide. But there is a race, concurrent logins at the same time will not always be detected as such but only counted as one. It appears to me that nproc is only enforced per login but counts globally. So a login with nproc 8192 and 5000 threads would have no problems, but a simultaneous login of the same UID with nproc 4096 and 50 threads would not be able to create more because the global count (5050) is above its nproc setting. [root@localhost ~]# ps -eLf | grep pascal | grep google/chrome | wc -l3792 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174645/"
]
} |
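If that diagnosis matches, one hedged fix is to raise nproc for the affected user with a drop-in under /etc/security/limits.d (file name and values below are only an example; an entry for a specific user overrides the * line in 20-nproc.conf):

    # /etc/security/limits.d/91-pascal-nproc.conf  (example values)
    pascal  soft  nproc  16384
    pascal  hard  nproc  16384

The new limit only applies to sessions started after the change, so the user has to log out and back in.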
289,217 | I wrote the following shell script for a pg backup: #!/bin/bashPG_USER=donatoDATABASE=mydbSERVER=216.58.219.174DIR="$HOME/pg_bak"DATE=$(date +"%m_%d_%y")FILE="$DATABASE_$DATE"ERROR_FILE="$HOME/pg_bak/error_bak/$FILE_error.txt"# pass @ .pgpassPG_BAK_NOW () { pg_dump -h $SERVER -U $PG_USER $DATABASE >$FILE 2>$ERROR_FILE code=$? if [ $code -ne 0 ]; then echo 1>&2 "The backup failed (exit code $code), check for errors in $ERROR_FILE" fi}echo "Ready to dump to $FILE" >> "$HOME/pg_status" cd $DIRif [ -f "$FILE" ];then rm $FILE PG_BAK_NOW else PG_BAK_NOWfi When I execute it, I know it executes for a bit of time: $ pgrep -fl pg_bak.sh4603 pg_bak.sh But then it does crash: $ ./pg_bak.shThe backup failed (exit code 1), check for errors in /home/viggy/pg_bak/error_bak/.txt Notice the .txt part. The name of the error file was supposed to be mydb_6_11_2016_error.txt , not .txt . Why did the bash script not interpolate the variable $FILE and the hardcoded string "_error"? It did interpolate $FILE in the dump file correctly, but not the error file. Why? | A very common mistake. This is missing curly braces: ERROR_FILE="$HOME/pg_bak/error_bak/$FILE_error.txt" and is fixed by: ERROR_FILE="$HOME/pg_bak/error_bak/${FILE}_error.txt" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/289217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173692/"
]
} |
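A quick way to see the difference the braces make (hypothetical value, runnable in any bash shell):

    FILE="mydb_06_11_16"
    echo "$FILE_error.txt"      # bash looks up a variable named FILE_error, which is unset -> ".txt"
    echo "${FILE}_error.txt"    # the braces delimit the name -> "mydb_06_11_16_error.txt"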
289,250 | POSIX defines the behavior of tools such as grep , awk , sed , etc which work against text files.Since it is a text file, I think there is the problem(s) of character encoding. Question: What is the character encodings supported by POSIX?(or, text files of what encoding can be handled by POSIX compiant systems?) | There is no specific character encoding mandated by POSIX. The only character in a fixed position is null, which must be 00. What POSIX does require is that all characters from its Portable Character Set exist. The Portable Character Set contains the printable ASCII characters, space, BEL, backspace, tab, carriage return, newline, vertical tab, form feed, and null. Where or how those are encoded is not specified, except that: They are all a single byte (8 bits). Null is represented with all bits zero. The digits 0-9 appear contiguously in that order. It imposes no other restrictions on the representation of characters, so a conforming system is free to support encodings with any representation of those characters, and any other characters in addition. Different locales on the same system can have different representations of those characters, with the exception of . and / , and if an application uses any pair of locales where the character encodings differ, or accesses data from an application using a locale which has different encodings from the locales used by the application, the results are unspecified. The only files that all POSIX-compliant systems are required to treat in the same way are files consisting entirely of null bytes. Files treated as text have their lines terminated by the encoding's representation of the PCS's newline character . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/289250",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/157713/"
]
} |
289,364 | I'm trying to send commands to a TCP port using netcat and pipe the response. When I run netcat and type my command, it prints the response correctly, but when I pass the command from a pipe, it sends the command correctly yet doesn't print the response. So, this works correctly: netcat localhost 9009 while this just sends the command but doesn't print the response: echo 'my_command' | netcat localhost 9009 Why? How can I make netcat print the response text? | As @Patrick said, this problem is usually due to netcat exiting before the response has been given. You remedy that by adding -q 2 to the command line, i.e., tell netcat to hang around 2 seconds after detecting EOF on standard input. Obviously you can make it wait some other number of seconds as well. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11920/"
]
} |
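With that flag, the piped invocation from the question becomes, for example (2 seconds is arbitrary, and -q is a Debian netcat option; other variants such as ncat may need a different flag):

    # Keep the connection open for 2 s after stdin closes so the reply can arrive
    echo 'my_command' | netcat -q 2 localhost 9009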
289,385 | I tried removing the '.' directory. I thought I could just delete my working directory without having to go into a parent directory. The point of my question is to look for some insight into how the linux system works to delete files. | Removing the current directory does not affect the file system integrity or its logical organization. Preventing . removal is done to follow the POSIX standard which states in the rmdir(2) manual page: If the path argument refers to a path whose final component is either dot or dot-dot, rmdir() shall fail. One rationale can be found in the rm manual page: The rm utility is forbidden to remove the names dot and dot-dot in order to avoid the consequences of inadvertently doing something like: rm -r .* On the other hand, explicitly removing the current directory (i.e. by stating its full or relative path) is an allowed operation under Unix, at least since SVR3 as it was forbidden with Unix version 7 until SVR2. This is very similar to what happens when you remove a file that is actively being read or written to. Processes accessing the delete file continue their read and write operations just like if nothing happened. After you have removed a process current directory, this directory is no more accessible though its path but its inode stay present on the file system until the process dies or change its own directory. Note that the process won't be able to use a path relative to its current directory to change its cwd (e.g. cd .. ) because there is no more a .. entry in its current directory. When someone type rmdir . , they likely expect the current directory entry to be removed but when a directory is removed (using its path), three directory entries are actually removed, . , .. , and the directory itself. Removing only . and not this directory's directory entry would create a non compliant directory but as already stated, it is forbidden by the standard. As @Emmanuel rightly pointed out, there is a second reason why removing . is not allowed. There is at least one POSIX compliant OS (Mac OS X with HFS+) that, with strong restrictions, supports creating hardlinks to existing directories. In such case, there is no clear way from inside the directory to know which hardlink is the one expected to be removed. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/289385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142064/"
]
} |
289,389 | When I am looking to create a new partition table, I have the following options: aix amiga bsd dvh gpt mac msdos pc98 sun loop The default in gparted appears to be msdos which I guess is an 'MBR' partition table. However gpt is more recent, but has less Windows support. I've used Linux for a long time, but I've never really looked into partitioning. What are the various options and their differences? Is there a recommended one for Linux-only disks? | The options correspond to the various partitioning systems supported in libparted ; there's not much documentation , but looking at the source code : aix provides support for the volumes used in IBM’s AIX (which introduced what we now know as LVM); amiga provides support for the Amiga’s RDB partitioning scheme; bsd provides support for BSD disk labels; dvh provides support for SGI disk volume headers; gpt provides support for GUID partition tables; mac provides support for old (pre-GPT) Apple partition tables; msdos provides support for DOS-style MBR partition tables; pc98 provides support for PC-98 partition tables; sun provides support for Sun’s partitioning scheme; loop provides support for raw disk access (loopback-style) — I’m not sure about the uses for this one. As you can see, the majority of these are for older systems, and you probably won’t need to create a partition table of any type other than gpt or msdos . For a new disk, I recommend gpt : it allows more partitions, it can be booted even in pre-UEFI systems (using grub ), and supports disks larger than 2 TiB (up to 8 ZiB for 512-byte sector disks). Actually, if you don’t need to boot from the disk, I’d recommend not using a partitioning scheme at all and simply adding the whole disk to mdadm , LVM, or a zpool, depending on whether you use LVM (on top of mdadm or not) or ZFS. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/289389",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
289,391 | Most of the files are gone, but I'm still left with these two files: ".RData" and ".Rhistory" Why is this the case? I'm working with R, but I don't know what those files are. Afterwards, I can individually remove them without needing to use sudo. | * only includes visible files. If you want to delete both those and the hidden ones, use: rm -rf * .* The dotglob option With bash, we can change this behavior and unhide files. To illustrate, let's create two files, one hidden and one not: $ touch unhidden .hide1$ ls *unhidden As you can see, only the unhidden one is shown by ls * . Now let's set the dotglob option: $ shopt -s dotglob$ ls *.hide1 unhidden Both files appear now. We can, of course, turn dotglob off if we want: $ shopt -u dotglob$ ls *unhidden Documentation From man bash : When a pattern is used for pathname expansion, the character "." at the start of a name or immediately following a slash must be matched explicitly, unless the shell option dotglob is set. When matching a pathname, the slash character must always be matched explicitly. In other cases, the ``.'' character is not treated specially. See the description of shopt below under SHELL BUILTIN COMMANDS for a description of the nocaseglob, nullglob, failglob, and dotglob shell options. In other words, pathname expansion ignores files whose names begin with . unless the . is explicitly specified. Safety issues To avoid unpleasant surprises, rm will refuse to remove the current directory . and the parent directory .. even if you specify them on the command line: $ rm -rf .*rm: refusing to remove ‘.’ or ‘..’ directory: skipping ‘.’rm: refusing to remove ‘.’ or ‘..’ directory: skipping ‘..’ | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/289391",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142064/"
]
} |
289,435 | From manpage of top VIRT -- Virtual Memory Size (KiB) The total amount of virtual memory used by the task. It includes all code, data and shared libraries plus pages that have been swapped out and pages that have been mapped but not used.USED -- Memory in Use (KiB) This field represents the non-swapped physical memory a task has used (RES) plus the non-resident portion of its address space (SWAP). It seems to me that VIRT and USED mean the same, i.e. they are both the sum of what a process occupies in the physical memory and what in the swap. So what are their differences and relations? By the way, by default, top doesn't show USED. How can I make it visible? | RES is the amount of RAM currently used by the process. This value can vary because memory pages might be swapped in or out. It might even be 0 for a process that has been sleeping for a long time, e.g. an unsolicited daemon. VIRT is the full size of all memory the process is using, whether in RAM or on disk (shared objects, mmaped files, swap area) so VIRT is always larger than or equal to RES. A process is always dealing with (i.e. allocating / accessing / freeing) virtual memory. It is up to the operating system to map some or all of these pages to RAM. USED is less than VIRT because it doesn't include the memory that is backed by something else than swap, for example code and libraries. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/289435",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
289,475 | The PackageKit Project dialog keeps popping up to prompt for the root password on my Debian Jessie desktop, apparently assuming that the logged-in account has root permissions or knows the root password. Is there some way to disable it in the desktop settings? | Are you using KDE? I had this problem the other day, and systemctl stop/disable packagekit didn't help at all. Here's the prompt: In this example, polkit.subject-pid is PID 2201, which is: username 2201 0.0 0.1 1354816 24440 ? Sl Oct27 2:46 kded5 [kdeinit5] Which suggests that KDE might be doing something. On my system, KDE doesn't have package management settings in the system settings tool, but opening apper's settings menu I found this: Setting this to Never took care of the problem for me. Update: systemctl mask packagekit works as well. See for example http://0pointer.de/blog/projects/three-levels-of-off , which describes the difference between systemctl stop, disable, and mask. mask makes services completely unstartable until they are unmasked again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289475",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
289,499 | For example: $ node-bash: /usr/local/bin/node: No such file or directory$ foo-bash: foo: command not found What's the difference? In both cases, node and foo are invalid commands, but it seems like Unix just can't find the node binary? When uninstalling a program, e.g. node , is there a way to clean this up so that I get $ node-bash: node: command not found EDIT: Results from type command: $ type nodenode is hashed (/usr/local/bin/node)$ type foo-bash: type: foo: not found | That's because bash remembered your command's location and stored it in a hash table. After you uninstalled node , the hash table isn't cleared; bash still thinks node is at /usr/local/bin/node , skips the PATH lookup, and calls /usr/local/bin/node directly, using execve() . Since node isn't there anymore, execve() returns the ENOENT error, which means "no such file or directory", and bash reports that error to you. In bash , you can remove an entry from the hash table: hash -d node or remove the entire hash table ( works in all POSIX shells ): hash -r | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/289499",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58080/"
]
} |
289,516 | I'm using the sh -c exec idiom in an ExecStart statement (in a service unit file) to interpolate some shell commands. For example: ExecStart=/bin/sh -ec "exec /usr/bin/foo $(/usr/bin/foo-config)" It works great. However, when I look at the journal for this service, the process name is sh instead of foo . Is there a way to lie about the process name using this idiom? | Ah, this turned out to be much easier than I thought it would be. Found an answer here: https://unix.stackexchange.com/a/229525/11995 ! SyslogIdentifier=foo | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289516",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11995/"
]
} |
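Putting it together, a minimal unit sketch (the paths and the foo-config helper are the hypothetical ones from the question) might look like:

    [Unit]
    Description=foo service

    [Service]
    Type=simple
    ExecStart=/bin/sh -ec "exec /usr/bin/foo $(/usr/bin/foo-config)"
    # Tag journal/syslog entries as "foo" rather than "sh"
    SyslogIdentifier=foo

    [Install]
    WantedBy=multi-user.target

After editing, run systemctl daemon-reload and restart the service; journalctl -u on the unit should then show foo as the identifier.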
289,519 | Looking for some help, I have two files, one is a large list of various names the other is coordinates. both files start each linewith an 8 digit code. I want to lookup 8 digit line code from File1 and copy the line contents to all matching line codes in File2. (File1) Only one occurence of hash / name. 136667ED ap1_01_a_ap1_01_rails_07035B337C ap1_01_a_arrows_00579546F82 ap1_01_a_centreline_0100E1D31E7 prop_bush_med_02 (File2) Some have multiple hash copies like 0E1D31E7, with different coordinates. 136667ED -1294.6945,-2376.0317,21.8279035B337C -1314.6719,-2721.7378,12.946779546F82 -1283.1066,-2529.9771,12.96350E1D31E7 1919.4160,-1814.3889,160.52100E1D31E7 1919.9885,-2628.2529,0.7537 0E1D31E7 192.0235,-2603.1790,4.9978 0E1D31E7 192.1050,4950.3540,389.4736 Below is how I would like them, The 8 Digit code / name, copied into any code line match in file 2. 136667ED -1294.6945,-2376.0317,21.8279 136667ED ap1_01_a_ap1_01_rails_07 035B337C -1314.6719,-2721.7378,12.9467 035B337C ap1_01_a_arrows_005 79546F82 -1283.1066,-2529.9771,12.9635 79546F82 ap1_01_a_centreline_010 0E1D31E7 1919.4160,-1814.3889,160.5210 0E1D31E7 prop_bush_med_02 0E1D31E7 1919.9885,-2628.2529,0.7537 0E1D31E7 prop_bush_med_02 0E1D31E7 192.0235,-2603.1790,4.9978 0E1D31E7 prop_bush_med_02 0E1D31E7 192.1050,4950.3540,389.4736 0E1D31E7 prop_bush_med_02 Join lines of text with repeated beginning This may work, I dont know how to run any of these commands. I'm using windows. | Ah, this turned out to be much easier than I thought it would be. Found an answer here: https://unix.stackexchange.com/a/229525/11995 ! SyslogIdentifier=foo | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174719/"
]
} |
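For the merge described in question 289,519 above, here is a hedged awk sketch (the file names File1.txt and File2.txt are placeholders; on Windows this can be run through gawk in Cygwin, Git Bash or WSL):

    # First pass: remember each "hash name" line from File1, keyed by the 8-digit code.
    # Second pass: print every File2 coordinate line followed by its matching File1 entry.
    awk 'NR==FNR { name[$1] = $0; next }
         $1 in name { print $0, name[$1] }' File1.txt File2.txt > merged.txt

Because the lookup is done by the first field, codes that occur several times in File2 (such as 0E1D31E7) each get the same name appended.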
289,546 | I have a table in which each entry looks something like, coagulation factor VIII-associated 1 /// coagulation factor VIII-associated 2 /// coagulation factor VIII-associated 3 I would like to use cut -d/// -f2 myfile.txt , but I'm getting an error: cut: bad delimiter Same case when I use single quotes or double quotes around the delimiter: cut -d'///' -f2 myfile.txt cut -d"///" -f2 myfile.txt Do I have to escape the slash somehow? If so, what is the escape character for cut? Documentation doesn't seem to have that information, and I tried \. | If the delimiter is anything other than one fixed character , then cut is the wrong tool. Use awk instead. Consider this test file which has three fields: $ cat fileone///two/2//two///three To print the second field and only the second field: $ awk -F/// '{print $2}' filetwo/2//two | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289546",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18450/"
]
} |
289,563 | I am using an embedded Arm with a Debian build. How does one list the compiled devices from the device tree? I want to see if a device is already supported. For those reading this, the "Device Tree" is a specification/standard for adding devices to an (embedded) Linux kernel. | The device tree is exposed as a hierarchy of directories and files in /proc . You can cat the files, eg: find /proc/device-tree/ -type f -exec head {} + | less Beware, most file content ends with a null char, and some may contain other non-printing characters. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/289563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109021/"
]
} |
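If the device-tree compiler is available (the dtc binary, packaged as device-tree-compiler on Debian), the live tree can also be dumped back into readable source form; a hedged example:

    # Decompile the running device tree from its filesystem representation
    dtc -I fs -O dts /proc/device-tree > live.dts
    # Then search it, e.g. for a driver's compatible strings
    grep -n 'compatible' live.dts | less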
289,566 | I'm new to Linux. I have Ubuntu 16.04 LTS installed. I'm trying to install Skype. I downloaded the package: skype-ubuntu-precise_4.3.0.37-1_i386 When I open the package, I'm taken to a window that gives information about Skype and has an "install" button. When I click "install", it says "installing" for a fraction of a second, before reverting back to "install" - but nothing has happened. There is no error message or anything giving me any further information. I am running a 64 bit system. How can I install Skype? | To install Skype you need to enable the Canonical Partners repository. Open the terminal and type the following command: sudo add-apt-repository "deb http://archive.canonical.com/ $(lsb_release -sc) partner" To install an i386 package on a 64-bit system, we need to enable multi-architecture support for both platforms. sudo dpkg --add-architecture i386 Update and install skype: sudo apt-get updatesudo apt-get install skype You can also install it manually. Download the .deb package from the official website (e.g.): wget http://download.skype.com/linux/skype-ubuntu-precise_4.3.0.37-1_i386.debsudo dpkg --add-architecture i386sudo apt-get updatesudo dpkg -i skype-ubuntu-precise_4.3.0.37-1_i386.debsudo apt-get -f install | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174132/"
]
} |
289,574 | A StackOverflow answer with > 3.5K votes features this one-liner for assigning to DIR the directory of the current bash script: DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" I'm puzzled by the nested double-quotes. As far as I can tell, the following fragments are double-quoted: "$( cd ""${BASH_SOURCE[0]}"" && pwd )" ...and everything else to the right of the = (i.e. $( dirname and ) ) is unquoted. In other words, I assume that the 2nd, 4th, and 6th " characters "close" the 1st, 3rd, and 5th " characters, respectively. I understand what the double-quotes in "${BASH_SOURCE[0]}" achieve, but what's the purpose of the other two pairs of double-quotes? If, on the other hand (and the high vote score notwithstanding), the above snippet is incorrect, what's the right way to achieve its nominal intent? (By nominal intent I mean: collect the value returned by pwd after first cd -ing to the directory returned by dirname "${BASH_SOURCE[0]}" , and do the cd -ing in a sub-shell, so that the $PWD of the parent shell remains unchanged). | Once one is inside $(...) , quoting starts all over from scratch. In other words, "..." and $(...) can nest within each other. Command substitution, $(...) , can contain one or more complete double-quoted strings. Also, double-quoted strings may contain one or more complete command substitutions. But, they do not interlace. Thus, a double-quoted string that starts inside a command substitution will never extend outside of it or vice versa. So, consider: DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" Inside the inner $(...) is: dirname "${BASH_SOURCE[0]}" In the above ${BASH_SOURCE[0]} is double-quoted. Any quotes, double or single, outside of the $(...) are irrelevant when determining that ${BASH_SOURCE[0]} is double-quoted. The outer $(...) contains: cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd Here, the expression $( dirname "${BASH_SOURCE[0]}" ) is double-quoted. The fact that there are quotes outside of the outer $(...) is irrelevant when considering what is inside it. The fact that there are quotes inside the inner $(...) is also irrelevant. Here is how the double-quotes match up: | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/289574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
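As a plain-text sketch of how the double quotes in the answer above pair up (matching quotes share the same digit):

    DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
    #   1      2           3                 3  2         1
    # The quotes marked 1 pair around the outer $( cd ... && pwd ),
    # the quotes marked 2 pair inside it around the inner $( dirname ... ),
    # and the quotes marked 3 pair around ${BASH_SOURCE[0]} inside the inner one.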
289,588 | I'm running a CentOS 7.2 virtual machine on Azure and want to install the stress tool for some tests related to alerts. The point is that even though I installed the latest epel-release (7-6), I still wasn't able to install stress. [root@azure-virtualmachine ~]# yum install epel-releaseLoaded plugins: fastestmirror, langpacksLoading mirror speeds from cached hostfile * epel: ftp.iij.ad.jpPackage epel-release-7-6.noarch already installed and latest versionNothing to do[root@azure-virtualmachine ~]# yum install stressLoaded plugins: fastestmirror, langpacksLoading mirror speeds from cached hostfile * epel: ftp.riken.jpNo package stress available. Can someone tell me how to fix this, please? | Stress is not available (yet) in Epel-Repository for RHEL 7. Google will find an RPM you can download from: ftp://fr2.rpmfind.net/linux/dag/redhat/el7/en/x86_64/dag/RPMS/stress-1.0.2-1.el7.rf.x86_64.rpm and install with yum localinstall stress-1.0.2-1.el7.rf.x86_64.rpm . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138782/"
]
} |
289,599 | Is php5.5 or php5.6 Available for CentOS 7? | Stress is not available (yet) in Epel-Repository for RHEL 7. Google will find a RPM you can download from: ftp://fr2.rpmfind.net/linux/dag/redhat/el7/en/x86_64/dag/RPMS/stress-1.0.2-1.el7.rf.x86_64.rpm and install with yum localinstall stress-1.0.2-1.el7.rf.x86_64.rpm . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174787/"
]
} |
289,604 | I am trying to remove a space between 2 strings, they are like this: 312.2 MB123.34 KB487.1 GB I want to change them to: 312.2MB123.34KB487.1GB I've been trying and I can get: echo "312.2 MB" | sed s/[0-9][[:space:]][GMK]//g312.B But when I try to do backreferences with sed : echo "312.2 MB" | sed s/\([0-9]\)[[:space:]]\([GMK]\)/\1/g312.2 MB My guess is that there is only one match, and then the back reference is the complete match, but: echo "312.2 MB" | sed s/\([0-9]\)[[:space:]]\([GMK]\)/TRY/g312.2 MB So, it is not working anymore when I use the () to capture the characters Probably the regex is not completely correct, but I don't know why. | The problem is quoting. Because you don't quote your sed command, the parenthesis \(...\) was interpreted by the shell before passing to sed . So sed treated them as literal parenthesis instead of escaped parenthesis, no back-reference affected. You need: echo "312.2 MB" | sed 's/\([0-9]\)[[:space:]]\([GMK]\)/\1\2/g' to make back-reference affected, and get what you want. Or more simply: echo "312.2 MB" | sed 's/ //' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/289604",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117626/"
]
} |
289,622 | I use debian and i3wm . How do I configure the touchpad? I mean "click-on-tap" and vertical scrolling via the right touchpad edge. | The easiest way to do it is with synclient . Use: synclient -l to list all options and their current settings and then you can use: synclient var1=value1 [var2=value2] ... to change the different options. To make changes permanent you can either create a script and run it on i3 logon or edit 50-synaptics.conf in /etc/X11/xorg.conf.d (if you don't have it there, copy it from /usr/share/X11/xorg.conf.d/50-synaptics.conf ). In that file you should write your settings under: Section "InputClass"Identifier "touchpad"Driver "synaptics"MatchIsTouchpad "on" In the form: Option "var1" "value1"... For tap-click and vertical edge scrolling you should add: Option "TapButton1" "1"Option "VertEdgeScroll" "1" You can also take a look at the Arch Linux wiki page about Touchpad_Synaptics If you want more gestures, take a look at touchegg | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/289622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/157465/"
]
} |
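Put together, a hedged example of the complete snippet from the answer above (the file name follows the usual 50-synaptics.conf convention):

    # /etc/X11/xorg.conf.d/50-synaptics.conf  (example)
    Section "InputClass"
        Identifier "touchpad"
        Driver "synaptics"
        MatchIsTouchpad "on"
        # one-finger tap acts as a left click
        Option "TapButton1" "1"
        # scroll by dragging along the right edge of the touchpad
        Option "VertEdgeScroll" "1"
    EndSection

Restart X (log out and back in) for the file to take effect; synclient lets you try the same values first without restarting.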
289,625 | I did a mistake when I added a new device in my raidz pool thinking ZFS should do it automatically. :~# zpool status pool: data state: ONLINE scan: resilvered 78,3G in 2h4m with 0 errors on Tue May 10 18:12:31 2016config: NAME STATE READ WRITE CKSUM data ONLINE 0 0 0 raidz2-0 ONLINE 0 0 0 c2t2d0 ONLINE 0 0 0 c2t3d0 ONLINE 0 0 0 c2t4d0 ONLINE 0 0 0 c2t5d0 ONLINE 0 0 0 c2t6d0 ONLINE 0 0 0 c2t7d0 ONLINE 0 0 0 c2t8d0 ONLINE 0 0 0 c2t9d0 ONLINE 0 0 0 c2t10d0 ONLINE 0 0 0 c2t11d0 ONLINE 0 0 0 c2t13d0 ONLINE 0 0 0 spares c2t12d0 AVAIL c2t14d0 AVAIL I'm thinking c2t13d0 is not in the raidz pool, is it?How to remove it form the data pool ? Thank for any help. | You can't. It's now in the pool as a single-drive vdev. vdevs can not be removed from a pool. That's the bad news. The worse news is that you've now effectively got a RAID-0 with your raidz2-0 vdev and the c2t13d0 vdev. This is NOT good. It's doubleplusungood. Your options are: to live with the pool you have created backup, destroy and re-create the pool, and restore. Neither option is good. backup/recreate/restore is the right option in the long run, but requires significant downtime (the only way to avoid that is to create a SECOND pool of the same size or larger and zfs send to that). BTW, one thing can do to fix the lack of redundancy is to attach a mirror to the c2t13d0 vdev. Maybe use one of the spares if they're the same size. Use something like: zpool attach data c2t13d0 anotherdisk It's far from ideal but a RAIDZ2 vdev striped with a mirror vdev has redundancy (still an abomination but not one that's going to eat your data), while RAIDZ2 striped with a single-drive doesn't have any reliable redundancy (some of your data will be ONLY on the single-drive vdev. This will invariably turn out to be your most valuable and irreplaceable data). It does make the first option ("live with it") suck a lot less...at least for now. In the long run, you'll want to rebuild your pool. I don't have access to the Solaris man page, but here's a relevant extract from the ZFS On Linux version of the zpool man page (bolding added by me for emphasis). Solaris version should be the same or very similar: zpool attach [-f] [-o property=value] pool device new_device Attaches new_device to an existing zpool device. The existing device cannot be part of a raidz configuration . If device is not currently part of a mirrored configuration, device automatically transforms into a two-way mirror of device and new_device. If device is part of a two-way mirror, attaching new_device creates a three-way mirror, and so on. In either case, new_device begins to resilver immediately. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100826/"
]
} |
289,629 | Note: I wrote an article on Medium that explains how to create a service, and how to avoid this particular issue: Creating a Linux service with systemd . Original question: I'm using systemd to keep a worker script working at all times: [Unit]Description=My workerAfter=mysqld.service[Service]Type=simpleRestart=alwaysExecStart=/path/to/script[Install]WantedBy=multi-user.target Although the restart works fine if the script exits normally after a few minutes, I've noticed that if it repeatedly fails to execute on startup, systemd will just give up trying to start it: Jun 14 11:10:31 localhost systemd[1]: test.service: Main process exited, code=exited, status=1/FAILUREJun 14 11:10:31 localhost systemd[1]: test.service: Unit entered failed state.Jun 14 11:10:31 localhost systemd[1]: test.service: Failed with result 'exit-code'.Jun 14 11:10:31 localhost systemd[1]: test.service: Service hold-off time over, scheduling restart.Jun 14 11:10:31 localhost systemd[1]: test.service: Start request repeated too quickly.Jun 14 11:10:31 localhost systemd[1]: Failed to start My worker.Jun 14 11:10:31 localhost systemd[1]: test.service: Unit entered failed state.Jun 14 11:10:31 localhost systemd[1]: test.service: Failed with result 'start-limit'. Similarly, if my worker script fails several times with an exit status of 255 , systemd gives up trying to restart it: Jun 14 11:25:51 localhost systemd[1]: test.service: Failed with result 'exit-code'. Jun 14 11:25:51 localhost systemd[1]: test.service: Service hold-off time over, scheduling restart. Jun 14 11:25:51 localhost systemd[1]: test.service: Start request repeated too quickly. Jun 14 11:25:51 localhost systemd[1]: Failed to start My worker. Jun 14 11:25:51 localhost systemd[1]: test.service: Unit entered failed state. Jun 14 11:25:51 localhost systemd[1]: test.service: Failed with result 'start-limit'. Is there a way to force systemd to always retry after a few seconds? | I would like to extend Rahul's answer a bit. systemd tries to restart multiple times ( StartLimitBurst ) and stops trying if the attempt count is reached within StartLimitIntervalSec . Both options belong to the [unit] section. The default delay between executions is 100ms ( RestartSec ) which causes the rate limit to be reached very fast. systemd won't attempt any more automatic restarts ever for units with Restart policy defined : Note that units which are configured for Restart= and which reach thestart limit are not attempted to be restarted anymore; however, theymay still be restarted manually at a later point, from which point on,the restart logic is again activated. Rahul's answer helps, because the longer delay prevents reaching the error counter within the StartLimitIntervalSec time. The correct answer is to set both RestartSec and StartLimitBurst to reasonable values though. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/289629",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30018/"
]
} |
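As a concrete, hedged illustration of those knobs (the values are arbitrary examples, not recommendations):

    [Unit]
    Description=My worker
    After=mysqld.service
    # At most 5 start attempts are counted per 100-second window
    StartLimitIntervalSec=100
    StartLimitBurst=5

    [Service]
    Type=simple
    Restart=always
    # Wait 30 s between restarts; 5 x 30 s exceeds the 100 s window,
    # so the burst limit can never be reached and restarts continue indefinitely
    RestartSec=30
    ExecStart=/path/to/script

    [Install]
    WantedBy=multi-user.target

Setting StartLimitIntervalSec=0 disables the rate limiting entirely; on older systemd versions the option was spelled StartLimitInterval and was commonly placed in the [Service] section instead.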
289,642 | I have a log file which reports on the output of a process, I'd like to extract all lines from between the last occurrence of two patterns. The patterns will be along the lines of; Summary process started at <datestring> and Summary process finished at <datestring> with return code <num> There will be several instances of these patterns throughout the file, along with a lot of other information. I'd like to print the only the last occurrence. I know that I can use: sed -n '/StartPattern/,/EndPattern/p' FileName To get lines between the patterns, but not sure how to get the last instance.Sed or awk solutions would be fine. Edit: I've not been clear at all about the behaviour that I want when multiple StartPatterns appear with no EndPattern, or if there's no EndPattern before the end of file, after detecting a StartPattern For multiple StartPatterns with missing EndPattern, I'd only like lines from the last StartPattern to the EndPattern. For a StartPattern which reaches the EOF without an EndPattern, I'd like everything up to the EOF, followed by inputting a string to warn that EOF was reached. | You can always do: tac < fileName | sed '/EndPattern/,$!d;/StartPattern/q' | tac If your system doesn't have GNU tac , you may be able to use tail -r instead. You can also do it like: awk ' inside { text = text $0 RS if (/EndPattern/) inside=0 next } /StartPattern/ { inside = 1 text = $0 RS } END {printf "%s", text}' < filename But that means reading the whole file. Note that it may give different results if there's another StartPattern in between a StartPattern and the next EndPattern or if the last StartPattern does not have an ending EndPattern or if there are lines matching both StartPattern and EndPattern . awk ' /StartPattern/ { inside = 1 text = "" } inside {text = text $0 RS} /EndPattern/ {inside = 0} END {printf "%s", text}' < filename Would make it behave more like the tac+sed+tac approach (except for the unclosed trailing StartPattern case). That last one seems to be the closest to your edited requirements. To add the warning would simply be: awk ' /StartPattern/ { inside = 1 text = "" } inside {text = text $0 RS} /EndPattern/ {inside = 0} END { printf "%s", text if (inside) print "Warning: EOF reached without seeing the end pattern" > "/dev/stderr" }' < filename To avoid reading the whole file: tac < filename | awk ' /StartPattern/ { printf "%s", $0 RS text if (!inside) print "Warning: EOF reached without seeing the end pattern" > "/dev/stderr" exit } /EndPattern/ {inside = 1; text = ""} {text = $0 RS text}' Portability note: for /dev/stderr , you need either a system with such a special file (beware that on Linux if stderr is open on a seekable file that will write the text at the beginning of the file instead of the current position within the file) or an awk implementation that emulates it like gawk , mawk or busybox awk (those work around the Linux issue mentioned above). On other systems, you can replace print ... > "/dev/stderr" with print ... | "cat>&2" . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83684/"
]
} |
289,685 | How to install VirtualBox Extension Pack to VirtualBox latest version on Linux? I would also like to be able to verify extension pack has been successfully installed and and uninstall it, if I wish. | First, you need to adhere to the VirtualBox Extension Pack Personal Use and Evaluation License . Second, I advise to only install this package if actually needed, here is the description of the VirtualBox Extension Pack functionality: Oracle Cloud Infrastructure integration, USB 2.0 and USB 3.0 Host Controller, Host Webcam, VirtualBox RDP, PXE ROM, Disk Encryption, NVMe. Now, let's download the damn thing: we need to store the latest VirtualBox version into a variable, let's call it LatestVirtualBoxVersion download the latest version of the VirtualBox Extension Pack, one-liner follows LatestVirtualBoxVersion=$(wget -qO - https://download.virtualbox.org/virtualbox/LATEST-STABLE.TXT) && wget "https://download.virtualbox.org/virtualbox/${LatestVirtualBoxVersion}/Oracle_VM_VirtualBox_Extension_Pack-${LatestVirtualBoxVersion}.vbox-extpack" Simplification attribution goes to guntbert . Thank you. You might want to verify its integrity by comparing its SHA-256 checksum available in file: https://www.virtualbox.org/download/hashes/${LatestVirtualBoxVersion}/SHA256SUMS using sha256sum -c --ignore-missing SHA256SUMS Then, we install it as follows: sudo VBoxManage extpack install --replace Oracle_VM_VirtualBox_Extension_Pack-${LatestVirtualBoxVersion}.vbox-extpack To verify if it has been successfully installed, we may list the installed extension packs: VBoxManage list extpacks To uninstall the extension pack: sudo VBoxManage extpack uninstall "Oracle VM VirtualBox Extension Pack" | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/289685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
289,743 | I am reading an Ubuntu 14 hardening guide and this is one of the suggestions: It generally seems like a sensible idea to make sure that only users in the sudo group are able to run the su command in order to act as (or become) root: dpkg-statoverride --update --add root sudo 4750/bin/su I looked up the dpkg-statoverride command but I still can’t figure out exactly what the command above is doing? It seems to imply that Ubuntu 14 by default allows anyone to sudo. To test, I created a new user, logged in as that user, tried to sudo and it failed – which is good. So what is the purpose of the suggestion above? | The purpose is to prevent ordinary users from running the su command (su is similar to sudo, the difference being that sudo executes one command, su starts a new session as a new user, which lasts until that user runs exit) The default mode of su is 4755 or rwsr-xr-x, the "s" means that the command is set-UID (which means that it always runs as the user who owns it, rather than the user who happens to execute it. In this case su is owned by root, so it always runs with root privileges) su has its own security measures in place to ensure that the user who executes it has authority to become another user (typically by asking for the other user's password), but it's conceivable that there would be a security vulnerability in su that would allow an attacker to somehow convince it to do something else without authority. By changing the mode to 4750, it prevents ordinary users (users other than root and those in the sudo group) from even reading or executing it in the first place, so an attacker would need to either change the ownership of that file, or change the mode of the file, or change their own effective UID/GID before they could even attempt to exploit this theoretical vulnerability in su. The dpkg-statoverride command is useful in this instance because it directs the package manager to use those ownership/mode values for that file, even if it gets replaced by a newer version (i.e. via apt upgrade). In other words, it makes it more permanent than just a chown and chmod. Here's a general-purpose tactic I recommend for this instance:Whenever I'm tweaking configuration of su/sudo or any authentication component on a Linux/UNIX machine, I'll open up another ssh/putty session to that server and log in as the root user, and just leave that session open in another window. That way if I do screw something up and lock myself out, I've already got a login session where I can fix anything I broke. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/289743",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172214/"
]
} |
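Two follow-up commands for the dpkg-statoverride answer above (paths as in the example; this is a sketch for inspecting and undoing the override, not an exact recipe):

# show whether an override is registered for /bin/su, and with what ownership/mode
dpkg-statoverride --list /bin/su

# drop the override again; this only removes the database entry,
# so restore the stock mode by hand (or reinstall the package)
sudo dpkg-statoverride --remove /bin/su
sudo chmod 4755 /bin/su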
289,883 | In zsh, how do I bind a keyboard shortcut to a function? In other words, how do I translate: bash: hw(){ echo "hello world"; }bind -x '"\C-h": hw;' to zsh? | It won't take the functions raw. They need to be wrapped in a "widget" by doing zle -N widgetname funcname The two can have the same name: zle -N hw{,} Then it's possible to do: bindkey ^h hw , causing Ctrl+h to run the hw widget which runs the hw function. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289883",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23692/"
]
} |
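Putting the pieces of the zsh answer above together, a snippet that could go in ~/.zshrc (the function name and key are simply the ones from the question):

hw() { echo "hello world"; }
zle -N hw          # wrap the function in a widget of the same name
bindkey '^h' hw    # bind Ctrl-H to the widget

Note that ^H is the backspace key in many terminals, so a different chord may be preferable. If the echoed text disturbs the prompt, printing the message with zle -M "hello world" inside the widget, or ending the function with zle reset-prompt, keeps the line editor display tidy.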
289,895 | I would like to be able to tell if a character special file would block if a character were read from it without actually reading a character from it. Can this be done? | You can do this from bash using a 0 timeout to read . if read -t 0then read datafi To test a file descriptor other than stdin, say 3, use -u 3 . To find how many chars are ready on stdin you can use a small perl script: #!/usr/bin/perlrequire 'sys/ioctl.ph';$size = pack("L", 0);ioctl(*STDIN, FIONREAD(), $size) or die "ioctl fail: $!\n";$size = unpack("L", $size);print "stdin ready: $size\n"; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5197/"
]
} |
289,964 | I want to edit manually my repo file from the command-line, preferably using sed . How can I do that based on the repo-name I want to edit? I want to search for a specific repo-name (example: reponame-2) and based on that change, for example, the option enabled=1 to enabled=0 [repo-name1]name=repo-name1baseurl=http://linktomyrepo.comenabled=1sslverify=0proxy=_none_[repo-name2]name=repo-name2baseurl=http://linktomyrepo.comenabled=1sslverify=0proxy=_none_ | Perl's "paragraph mode", where "lines" are defined by consecutive newlines, is perfect for this: $ perl -00pe 's/enabled=1/enabled=0/ if /\[repo-name1/' file [repo-name1]name=repo-name1baseurl=http://linktomyrepo.comenabled=0sslverify=0proxy=_none_[repo-name2]name=repo-name2baseurl=http://linktomyrepo.comenabled=1sslverify=0proxy=_none_ Or, to edit the original file directly: perl -i -00pe 's/enabled=1/enabled=0/ if /\[repo-name1/' file Alternatively, you could use awk : $ awk -vRS='\n\n' -vORS='\n\n' '/\[repo-name1/{sub(/enabled=1/,"enabled=0")}1;' file [repo-name1]name=repo-name1baseurl=http://linktomyrepo.comenabled=0sslverify=0proxy=_none_[repo-name2]name=repo-name2baseurl=http://linktomyrepo.comenabled=1sslverify=0proxy=_none_ And, if you have a recent version of GNU-awk or any other awk suporting -i , you can do this to edit in place: awk -iinplace -vRS='\n\n' -vORS='\n\n' '/\[repo-name1/{sub(/enabled=1/,"enabled=0")}1;' file Alternatively, to avoid the extra blank lines that the awk above adds to the end of the file, you could do something more complex like: $ awk -F= '/\[repo-name1/{a=1}/^\s*$/{a=0}a==1 && $1=="enabled"{$2=0}1;' file[repo-name1]name=repo-name1baseurl=http://linktomyrepo.comenabled 0sslverify=0proxy=_none_[repo-name2]name=repo-name2baseurl=http://linktomyrepo.comenabled=1sslverify=0proxy=_none_ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/289964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55603/"
]
} |
289,999 | Using command line, I know that I can encrypt a directory with the following command: zip -er Directory.zip /path/to/directory However, this does not encrypt the filenames themselves. If someone runs: unzip Directory.zip and repeatedly enters a wrong password, the unzip command will loop through all of the contained filenames until the correct password is entered. Sample output: unzip Directory.zip Archive: Directory.zip creating: Directory/[Directory.zip] Directory/sensitive-file-name-1 password: password incorrect--reenter: password incorrect--reenter: skipping: Directory/sensitive-file-name-1 incorrect password[Directory.zip] Directory/sensitive-file-name-2 password: password incorrect--reenter: password incorrect--reenter: skipping: Directory/sensitive-file-name-2 incorrect password[Directory.zip] Directory/sensitive-file-name-3 password: password incorrect--reenter: password incorrect--reenter: skipping: Directory/sensitive-file-name-3 incorrect password and so on. Using command line, is there a way to zip a directory with encryption while also encrypting or hiding the filenames themselves? Thank you. | In a zip file, only file contents is encrypted. File metadata, including file names, is not encrypted. That's a limitation of the file format: each entry is compressed separately, and if encrypted, encrypted separately. You can use 7-zip instead. It supports metadata encryption ( -mhe=on with the Linux command line implementation). 7z a -p -mhe=on Directory.7z /path/to/directory There are 7zip implementations for all major operating systems and most minor ones but that might require installing extra software (IIRC Windows can unzip encrypted zip files off the box these days). If requiring 7z for decryption is a problem, you can rely on zip only by first using it to pack the directory in a single file, and then encrypting that file. If you do that, turn off compression of individual files and instruct the outer zip to compress the zip file, you'll get a better compression ratio overall. zip -0 -r Directory.zip /path/to/directoryzip -e -n : encrypted.zip Directory.zip | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/289999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175250/"
]
} |
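A quick way to confirm the difference described above (assuming the archives from the example): with header encryption even listing the 7z archive should prompt for the password, whereas a conventionally encrypted zip reveals its file names freely.

7z l Directory.7z        # with -mhe=on this prompts for the password before showing any names
unzip -l Directory.zip   # lists the (sensitive) file names without asking for anything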
290,013 | I am currently trying to run gprof2dot on the gmon.out created by using the -pg option while compiling. Now I have already done pip install gprof2dot . How am I supposed to run this on the gmon.out file that was created? Using the instructions given on the Github page( gprof main | gprof2dot.py | dot -Tpng -o output.png ), I get the error: bash: gprof2dot.py: command not found Note : My executable is called main . | In a zip file, only file contents is encrypted. File metadata, including file names, is not encrypted. That's a limitation of the file format: each entry is compressed separately, and if encrypted, encrypted separately. You can use 7-zip instead. It supports metadata encryption ( -mhe=on with the Linux command line implementation). 7z a -p -mhe=on Directory.7z /path/to/directory There are 7zip implementations for all major operating systems and most minor ones but that might require installing extra software (IIRC Windows can unzip encrypted zip files off the box these days). If requiring 7z for decryption is a problem, you can rely on zip only by first using it to pack the directory in a single file, and then encrypting that file. If you do that, turn off compression of individual files and instruct the outer zip to compress the zip file, you'll get a better compression ratio overall. zip -0 -r Directory.zip /path/to/directoryzip -e -n : encrypted.zip Directory.zip | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/290013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166916/"
]
} |
290,116 | I understand what an inode is but what is the exact definition of an orphaned inode? I don't really get what that means. UPDATE There was a time that a server I was managing ran out of inodes but when I would do a df -h it showed me that the server only had 60% space used up. I'm guessing this is because of orphaned inodes. How can all the inodes be used up but there's still "space" left on the server? Can you address this in your answers? | An orphaned inode is an inode which isn't attached to a directory entry in the filesystem, which means it can't be reached. Orphaned inodes can appear for a number of reasons: temporary files which are deleted but kept open (a common practice) occupy inodes on the filesystem; if the system reboots without shutting down properly, these inodes remain and are orphaned filesystem corruption may corrupt a directory without affecting the inodes of the files the directory contains; these inodes are then orphaned fsck creates new directory entries for orphaned inodes in lost+found . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/290116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175366/"
]
} |
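Regarding the UPDATE in the question above: a filesystem that is "out of inodes" while df -h still shows free space has usually exhausted its fixed inode table with very many small files; orphaned inodes are rarely the cause. A quick check, sketched here with GNU tools:

df -h /   # block usage
df -i /   # inode usage -- this is the column that hits 100% in that situation

# rough way to find the directories holding the most files (common culprits:
# mail queues, PHP session directories, cache directories)
sudo find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head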
290,146 | I'm working with Bash 3, and I'm trying to form a conditional. In C/C++, it's dead simple: ((A || B) && C) . In Bash, it's turning out not so (I think the Git authors must have contributed this code before they moved onto other endeavors). This does not work. Note that <0 or 1> is not a string literal; it means a 0 or 1 (generally comes from grep -i ). A=<0 or 1>B=<0 or 1>C=<0 or 1>if [ [ "$A" -eq "0" ] || [ "$B" -ne "0" ] ] && [ "$C" -eqe "0" ]; then ... fi It results in: line 322: syntax error near unexpected token `[[' I then tried: A=<0 or 1>B=<0 or 1>C=<0 or 1>if [ ([ "$A" -eq "0" ]) || ([ "$B" -ne "0" ]) ] && [ "$C" -eq "0" ]; then ... fi it results in: line 322: syntax error near unexpected token `[[' Part of the problem is search results are the trivial examples, and not the more complex examples with compound conditionals. How do I perform a simple ((A || B) && C) in Bash? I'm ready to just unroll it and repeat the same commands in multiple blocks: A=<0 or 1>B=<0 or 1>C=<0 or 1>if [ "$A" -eq "0" ] && [ "$C" -eq "0" ]; then ...elif [ "$B" -ne "0" ] && [ "$C" -eq "0" ]; then ... fi | The syntax of bash is not C-like, even if a little part of it is inspired by C. You can't simply try to write C code and expect it to work. The main point of a shell is to run commands. The open-bracket command [ is a command, which performs a single test¹. You can even write it as test (without the final closing bracket). The || and && operators are shell operators: they combine commands , not tests. So when you write [ [ "$A" -eq "0" ] || [ "$B" -ne "0" ] ] && [ "$C" -eq "0" ] that's parsed as [ [ "$A" -eq "0" ] ||[ "$B" -ne "0" ] ] &&[ "$C" -eq "0" ] which is the same as test [ "$A" -eq "0" ||test "$B" -ne "0" ] &&test "$C" -eq "0" Notice the unbalanced brackets? Yeah, that's not good. Your attempt with parentheses has the same problem: spurious brackets. The syntax to group commands together is braces. The way braces are parsed requires a complete command before them, so you'll need to terminate the command inside the braces with a newline or semicolon. if { [ "$A" -eq "0" ] || [ "$B" -ne "0" ]; } && [ "$C" -eq "0" ]; then … There's an alternative way which is to use double brackets. Unlike single brackets, double brackets are special shell syntax. They delimit conditional expressions . Inside double brackets, you can use parentheses and operators like && and || . Since the double brackets are shell syntax, the shell knows that when these operators are inside brackets, they're part of the conditional expression syntax, not part of the ordinary shell command syntax. if [[ ($A -eq 0 || $B -ne 0) && $C -eq 0 ]]; then … There's yet another way, using double parentheses, which delimit arithmetic expressions . Arithmetic expressions perform integer computations with a very C-like syntax. if (((A == 0 || B != 0) && C == 0)); then … You may find my bash bracket primer useful. [ can be used in plain sh. [[ and (( are specific to bash (and ksh and zsh). ¹ It can also combine multiple tests with boolean operators, but this is cumbersome to use and has subtle pitfalls so I won't explain it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/290146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
290,176 | I'm trying to find a way to grep how many messages in a file are older than 14 days and have a value of number of results return Example going with today's date of 20160616 . $grep 'Put Date' filename : Put Date :'20160425'Put Date :'20160501'Put Date :'20160514'Put Date :'20160609'Put Date :'20160610'Put Date :'20160616' The results should see the following are older than 14 days and would return 3 : Put Date :'20160226'Put Date :'20160501'Put Date :'20160514' | The syntax of bash is not C-like, even if a little part of it is inspired by C. You can't simply try to write C code and expect it to work. The main point of a shell is to run commands. The open-bracket command [ is a command, which performs a single test¹. You can even write it as test (without the final closing bracket). The || and && operators are shell operators, they combine commands , not tests. So when you write [ [ "$A" -eq "0" ] || [ "$B" -ne "0" ] ] && [ "$C" -eq "0" ] that's parsed as [ [ "$A" -eq "0" ] ||[ "$B" -ne "0" ] ] &&[ "$C" -eq "0" ] which is the same as test [ "$A" -eq "0" ||test "$B" -ne "0" ] &&test "$C" -eq "0" Notice the unbalanced brackets? Yeah, that's not good. Your attempt with parentheses has the same problem: spurious brackets. The syntax to group commands together is braces. The way braces are parsed requires a complete command before them, so you'll need to terminate the command inside the braces with a newline or semicolon. if { [ "$A" -eq "0" ] || [ "$B" -ne "0" ]; } && [ "$C" -eq "0" ]; then … There's an alternative way which is to use double brackets. Unlike single brackets, double brackets are special shell syntax. They delimit conditional expressions . Inside double brackets, you can use parentheses and operators like && and || . Since the double brackets are shell syntax, the shell knows that when these operators are inside brackets, they're part of the conditional expression syntax, not part of the ordinary shell command syntax. if [[ ($A -eq 0 || $B -ne 0) && $C -eq 0 ]]; then … If all of your tests are numerical, there's yet another way, which delimit artihmetic expressions . Arithmetic expressions perform integer computations with a very C-like syntax. if (((A == 0 || B != 0) && C == 0)); then … You may find my bash bracket primer useful. [ can be used in plain sh. [[ and (( are specific to bash (and ksh and zsh). ¹ It can also combine multiple tests with boolean operators, but this is cumbersome to use and has subtle pitfalls so I won't explain it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/290176",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175411/"
]
} |
290,179 | On a cluster where I am part of the management team, I often have to go through the multipage standard output of various commands such as sudo find / to look for any troubles such as broken links or to check the directory trees. At other times, I need to review long text files with lists of items on them to see if there are any unusual names. Normally, piping the output through less , I can scroll pagewise but I figure it would be sufficient if the standard output scrolled a little slowly just like the credit roll at the end of a movie. Is there a way to accomplish this in bash or any other terminal environment? | Answer from thrig's comment on OP. Works very well. Change the decimal after sleep to modify the time between lines. sudo find / | awk '{system("sleep .5");print}' Quit with ctrl+z and then kill the job (when using bash); ctrl+c only exits that line. Edit: Did some research based on a comment below. The suggestion awk '{system(sleep.5)||exit;print}' wasn't working on my system, but the following does seem to allow a ctrl+c exit. awk '{if (system("sleep .5 && exit 2") != 2) exit; print}' Putting it in a script or giving it an alias will save you from carpal tunnel. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8141/"
]
} |
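Following the answer's own suggestion to wrap the command in a script or alias, a small sketch (the function name slowcat is made up; adjust the default delay to taste):

slowcat() {
    awk -v delay="${1:-.5}" '{ if (system("sleep " delay " && exit 2") != 2) exit; print }'
}

sudo find / | slowcat      # half a second per line
sudo find / | slowcat .1   # a faster roll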
290,195 | The apache2 package was broken on my debian server, so I started by uninstalling all apache2 related package.Now everything seems to be uninstalled properly. dpkg -l | grep 'apache' doesn't return anything However, I can't seem to be able to install apache2... With apt-get : sudo apt-get install apache2Reading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: apache2 : Depends: apache2-mpm-worker (= 2.2.22-13+deb7u6) but it is not going to be installed or apache2-mpm-prefork (= 2.2.22-13+deb7u6) but it is not going to be installed or apache2-mpm-event (= 2.2.22-13+deb7u6) but it is not going to be installed or apache2-mpm-itk (= 2.2.22-13+deb7u6) but it is not going to be installed Depends: apache2.2-common (= 2.2.22-13+deb7u6) but it is not going to be installedE: Unable to correct problems, you have held broken packages. And with aptitude : sudo apt-get install apache2Reading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: apache2 : Depends: apache2-mpm-worker (= 2.2.22-13+deb7u6) but it is not going to be installed or apache2-mpm-prefork (= 2.2.22-13+deb7u6) but it is not going to be installed or apache2-mpm-event (= 2.2.22-13+deb7u6) but it is not going to be installed or apache2-mpm-itk (= 2.2.22-13+deb7u6) but it is not going to be installed Depends: apache2.2-common (= 2.2.22-13+deb7u6) but it is not going to be installedE: Unable to correct problems, you have held broken packages.root@dora:~# sudo aptitude install apache2The following NEW packages will be installed: apache2 apache2-mpm-worker{a} apache2-utils{a} apache2.2-bin{a} apache2.2-common{a} libaprutil1-dbd-sqlite3{ab} libaprutil1-ldap{ab} 0 packages upgraded, 7 newly installed, 0 to remove and 0 not upgraded.Need to get 1 290 kB of archives. After unpacking 5 146 kB will be used.The following packages have unmet dependencies: libaprutil1-dbd-sqlite3 : Depends: libaprutil1 (= 1.4.1-3) but 1.5.4-1+b1 is installed. libaprutil1-ldap : Depends: libaprutil1 (= 1.4.1-3) but 1.5.4-1+b1 is installed.The following actions will resolve these dependencies: Keep the following packages at their current version:1) apache2 [Not Installed] 2) apache2-mpm-worker [Not Installed] 3) apache2.2-bin [Not Installed] 4) apache2.2-common [Not Installed] 5) libaprutil1-dbd-sqlite3 [Not Installed] 6) libaprutil1-ldap [Not Installed] Accept this solution? [Y/n/q/?] YNo packages will be installed, upgraded, or removed.0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.Need to get 0 B of archives. After unpacking 0 B will be used. How can I solve this issue and get Apache to work again? 
Edit to answer Martin: cat /etc/apt/sources.listdeb http://debian.mirrors.ovh.net/debian/ wheezy maindeb-src http://debian.mirrors.ovh.net/debian/ wheezy maindeb http://security.debian.org/ wheezy/updates maindeb-src http://security.debian.org/ wheezy/updates maindeb http://packages.dotdeb.org wheezy alldeb-src http://packages.dotdeb.org wheezy alldeb http://packages.dotdeb.org wheezy-php55 alldeb-src http://packages.dotdeb.org wheezy-php55 alldeb http://security.debian.org/ testing/updates maindeb http://ppa.launchpad.net/webupd8team/java/ubuntu precise maindeb-src http://ppa.launchpad.net/webupd8team/java/ubuntu precise maindeb http://repo.mysql.com/apt/debian/ wheezy mysql-5.6deb-src http://repo.mysql.com/apt/debian/ wheezy mysql-5.6deb http://dl.google.com/linux/mod-pagespeed/deb/ stable main And finally: apt-cache policy libaprutil1libaprutil1: Installed: 1.5.4-1+b1 Candidate: 1.5.4-1+b1 Version table: *** 1.5.4-1+b1 100 100 /var/lib/dpkg/status 1.4.1-3 500 500 http://debian.mirrors.ovh.net/debian wheezy/main amd64 Packages | Answer from thrig's comment on OP. Works very well. Change the decimal after sleep to modify the time between lines. sudo find / | awk '{system("sleep .5");print}' Quit with ctrl+z and then kill the job (when using bash); ctrl+c only exits that line. Edit: Did some research based on a comment below. The suggestion awk '{system(sleep.5)||exit;print}' wasn't working on my system, but the following does seem to allow a ctrl+c exit. awk '{if (system("sleep .5 && exit 2") != 2) exit; print}' Putting it in a script or giving it an alias will save you from carpal tunnel. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290195",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147802/"
]
} |
290,223 | I have NGINX configured like this as a reverse proxy for http requests: server { listen 80; server_name 203.0.113.2; proxy_set_header X-Real-IP $remote_addr; # pass on real client IP location / { proxy_pass http://203.0.113.1:3000; }} I also want to proxy ssh (Port 22) requests. Can I add another server block like this to the same configuration file: server { listen 22; server_name 203.0.113.2; proxy_set_header X-Real-IP $remote_addr; # pass on real client IP location / { proxy_pass http://203.0.113.1:22; }} Such that the end result is this: server { listen 80; server_name 203.0.113.2; proxy_set_header X-Real-IP $remote_addr; # pass on real client IP location / { proxy_pass http://203.0.113.1:3000; }}server { listen 22; server_name 203.0.113.2; proxy_set_header X-Real-IP $remote_addr; # pass on real client IP location / { proxy_pass http://203.0.113.1:22; }} | The ssh protocol is not based on HTTP, and, as such, cannot be proxied through the regular proxy_pass of ngx_http_proxy_module However, recently, starting with nginx 1.9.0 (released as stable with 1.10.0 on 2016-04-26), nginx did gain support for doing TCP stream proxying, which means that if you have a recent-enough version of nginx, you can, in fact, proxy ssh connections with it (however, note that you wouldn't be able to add anything like the X-Real-IP to the proxied connection, as this is not based on HTTP). For more information and examples, take a look at: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html https://stackoverflow.com/questions/34741571/nginx-tcp-forwarding-based-on-hostname/34958192#34958192 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/290223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164760/"
]
} |
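For the nginx entry above, a minimal sketch of such a stream proxy (addresses taken from the question; requires nginx >= 1.9 built with the stream module). The stream block sits at the top level of nginx.conf, alongside — not inside — the http block, and the listen port must not collide with a local sshd:

stream {
    server {
        listen 2222;               # or 22, if nothing else on this host uses it
        proxy_pass 203.0.113.1:22;
    }
}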
290,242 | Given a file with multiple lines, I want to change every space to dash. I did like that: #!/bin/bashwhile read line; do echo "${line// /-}"done This works just fine, but I need a better method! | The standard tr utility does exactly this: tr ' ' '-' <filename.old >filename.new You can use a tool like sponge to do in-place editing (hides the fact that a temporary file is being used): tr ' ' '-' <filename | sponge filename | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/290242",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150030/"
]
} |
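If sponge (from moreutils) is not installed, a temporary file achieves the same in-place effect for the answer above — with the caveat that mv replaces the original file, so hard links and special ownership are not preserved the way sponge preserves them:

tmp=$(mktemp) &&
tr ' ' '-' <filename >"$tmp" &&
mv "$tmp" filename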
290,321 | I can't find a way to pass the output of the whereis command to the cd command in the same line, so that I don't have to do the cd in a second step. I have tried passing it like below: cd $(whereis node_modules) Or cd "`dirname $(whereis node_modules)`" Also cd "$(whereis node_modules)" But none of the above methods works. Can somebody find what is wrong in the commands above? | You can do that with: cd "`which node_modules`" With dirname to get the directory: cd "$(dirname "$(which node_modules)" )" Since you mentioned in the comments that you want to do this in one step and that node_modules is a directory, you can also do it with the following command: cd $(whereis node_modules | cut -d ' ' -f2) (Note that the latter command assumes that the Linux whereis is being used, not the BSD one, and that the path does not contain any spaces.) As suggested by @Dani_I, you can have a look at this Why not use "which"? What to use then? , which might be more useful. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128523/"
]
} |
290,346 | At work my laptop is set to the right of my external monitor. Not because I like to but because no other configuration is possible. When I need to extend my desktop I always have to select « Extend to the right » (to enable the external monitor with auto-detected resolution) then switch to « Advanced » and manually move the icon of the laptop display to the right of that of the external monitor and press « Apply ». It gets boring when you have to do that on a daily basis. Besides — and I don't know why — the « Mirror Displays » button is disabled. The display selection window shows itself right after I plug in the HDMI cable, which is great. If the desktop was extended before I unplug the HDMI cable, it snaps back to the laptop panel immediately afterwards and that's great too. Therefore I need none of the « External monitor » and « Laptop » buttons. All in all, out of 4 buttons on that panel only one is half useful, the other ones not at all. So aren't there any... plans to add an « Extend to the left » button in xfce4-display-settings ? Why arbitrary limit extension to right sides? Are left sides... evil? | This is a big old bug - and they don't seem ready to fix it. In fact there is no extending of the internal primary display to the left in Xfce : extending to the left (as indicated in the other answer - also here - or by other methods like drag&drop in GUIs that allow re-arranging) makes the external display the primary one. The very term primary gets a different meaning in Xfce: it's just the LEFT part of ONE desktop/workspace that spans on multiple monitors. Why? It is related to the way Xfce treats workspaces in relation to multiple display monitors. Xfce always shares the same workspace between different displays from left to right. Making a display the primary one on Xfce simply means showing on that display the left part of one workspace (and the right part on the second display). I can only understand the difference by comparison with other desktops that I use in other systems. In the Pantheon desktop (elementary OS) and most other modern desktops extending means that each display has a separate workspace . The primary display has all the controls, while the second display simply takes another workspace, and that is set to the right or left depending on your setting. In Xfce it's not so. With an extended display connected, I see that the second workspace is not present there, it is accessible only if I scroll for that (according to my Xfce desktop setting): the two displays share the same workspace from left to right (including the panel): the primary is always to the left . Changing the workspace when you have two monitors simply shares the second workspace between the two monitors (while the first workspace becomes absent from all monitors), instead of giving each monitor its own separate workspace. That is why with the most common configuration of the panel, the left part (the menu button, the window buttons) is on the primary, the right part (notification area, clock etc) is on the second display if the 'Span monitors' option is enabled . Left (primary) half of the workspace: Right half of the same workspace: If the 'Span monitors' option is not enabled, the panel will be present only on the primary left external monitor. I think that this situation is dependent on the basic configuration of Xfce and cannot be changed without changing that base. 
Workaround: Extend to the left external monitor (which thus becomes the primary) then move all controls - namely the panel(s) - to the right (secondary) internal monitor, making it act as if it were the primary. To extend to the left use a command like xrandr --output LVDS1 --auto --right-of VGA1 --output VGA1 --auto (where LVDS1 is the internal display). It is very useful to add that to a panel or desktop launcher, or associate it with a shortcut. Un-check the 'Span monitors' option and unlock the panel. Drag & drop the panel onto the right (internal) monitor. (If that doesn't seem to work, make the panel vertical then move it, then make it horizontal if that's what you want.) The panel position seems to be remembered after coming back to this display configuration and after restart. After the command, the windows go on the external left monitor: to have a quick way to send them to next monitor, see this answer . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
290,423 | Flatpak and snapd packages are available on other distributions because their respective package managers being built for installation on multiple distros [1][2]. Is this also true for the Guix package manager? I remember hearing that Guix packages were (or will be) installable on Debian, but I can't find a reference. [1] http://flatpak.org/index.html#about [2] http://arstechnica.com/information-technology/2016/06/goodbye-apt-and-yum-ubuntus-snap-apps-are-coming-to-distros-everywhere/ | I'm an occasional Guix contributor. Yes, you can run Guix packages on top of other distributions (GuixSD is a standalone distribution of Guix, whereas Guix itself is a package manager, so it can be used under any other distribution). The Binary installation section shows you how to easily set up Guix on top of another GNU/Linux distribution. You can also run Guix without splatting it over your root filesystem; see the " Running Guix Before It Is Installed " section. (There are other tutorials out there; I've even written my own, you can search for it if you so care.) So yes, Guix can be run as a userspace packaging system on top of a more "traditional" distribution. (You do need the daemon running as root and the worker users and etc, but once you have that, different users can installing packages for themselves without clobbering each other.) However, you might notice that maybe it's a bit more work than desirable to get Guix running. It would be much nicer if you could apt-get install guix or install from yum, pacman, etc. That would reduce some steps! Guix could be packaged for other distributions; Diane Trout was working on this for Debian . However, for good reasons (maybe too long to go into here?) Guix does not follow the Filesystem Heirarchy Standard, and for that reason alone will probably not be installed in the main repositories of Debian at least soon. Maybe some day this will change. Hope that helps! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290423",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/269/"
]
} |
290,449 | I do a change in my current keyboard layout English(US-Dvorak) but it does not react with the following code without changing back-and-forth to another keyboard layout, for xserver reloading # restore your current keyboard settings; sudo apt-get install --reinstall xkb-data# you close also extra Control at Capslock etcgsettings set org.gnome.desktop.input-sources xkb-options "[]"# do any change in `/usr/share/X11/xkb/symbols/us`; etc add about 3rd level config for some [A,a] like [A,a,x]. sudo dpkg-reconfigure xkb-data# TODO Is there any command which can cause reload of xserver regardless you have active your current keyboard where you do the change? I do not want to manually cause the reloading of xserver by doing such a switch. It would be great to do by a one-liner. Systems: Ubuntu 16.04 Linux kernel: 4.6 Keyboard model: pc105 Keyboard layout: English (Dvorak) = US-dvorak Related thread: here about How to Get A with Dots in Dvorak of Ubuntu 16.04? | From here : To apply new [keyboard] settings, restarting the keyboard-setup service should suffice, otherwise you can try to restart kernel input system via udev: udevadm trigger --subsystem-match=input --action=change For completeness, restarting keyboard-setup would look like # For Ubuntu < 16.04service keyboard-setup restart# For Ubuntu >= 16.04systemctl restart keyboard-setup | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290449",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
290,474 | I have a fresh installation of CentOS. I created a user and added it to wheel group (to add it to sudoers list). I logged in with the credentials of new user. I then run the command cat useradd . Ideally, I should get a permission denied response. But the command worked and I got the details. I looked at the permissions with ls -al . Here is the screenshot : It shows the owner is root, but it still showed the details to other user. Can anyone kindly explain what is going on here? Thank you. (I am not that familiar with Linux. I have used it in past but it was a long time ago.) | In the image you show that the "other" group has read permissions; if you tried to append echo testline >> useradd or execute ./useradd it would give you a permission denied. If you're looking to remove read permissions for the 'other' users you can use sudo chmod o-r useradd | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290474",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175618/"
]
} |
290,478 | I wish to compile the results of a find command where the find command returns the relative filepath of a java file located in subdirectories. E.G., I am in ./ and I want to run javac on a file, someFile.java , but I don't know at "command-type-time" what the relative path is. Running find . -name someFile.java returns the correct relative path (and only the correct one so long as someFile.java is uniquely named within subdirectories of . ) I wish to compile THIS file. So I have attempted javac | find . -name someFile.java but I am not sure about why this is not working. Is this even possible? | In the image you show that the "other" group has read permissions; if you tried to append echo testline >> useradd or execute ./useradd it would give you a permission denied. If you're looking to remove read permissions for the 'other' users you can use sudo chmod o-r useradd | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290478",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175620/"
]
} |
290,517 | I recently did a clean install of Linux Mint 17.3 with Cinnamon on my machine. Before the clean install, if I right clicked on a file or folder in nemo, the menu would have a 'create shortcut' option. Now after the clean install, that option isn't there. I've gone through the nemo preferences and I can't find any option to enable it. After some searching I found out a keyboard shortcut for making file shortcuts in nemo ( ctrl + shift +click and drag), but I'd much rather the more intuitive (and memorable) right click menu option. Similarly, other right click options that are now missing are copy to other pane home Downloads etc move to other pane home Downloads etc How can I get those options back as well? I've tried searching through the Nemo preferences, but to no avail. | When you right click the file or folder in Nemo, there will be a + sign at the top. Clicking that will expand the menu and give you the options you want. For what it's worth, I found this functionality at this link: https://forums.linuxmint.com/viewtopic.php?t=212256 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114830/"
]
} |
290,525 | I am reading this intro to the command line by Mark Bates. In the first chapter, he mentions that hard links cannot span file systems. An important thing to note about hard links is that they only work on the current file system. You can not create a hard link to a file on a different file system. To do that you need to use symbolic links, Section 1.4.3. I only know of one filesystem. The one starting from root ( / ). This statement that hard links cannot span over file systems doesn't make sense to me. The Wikipedia article on Unix file systems is not helpful either. | Hopefully I can answer this in a way that makes sense for you.A file system in Linux, is generally made up of a partition that is formatted in one of various ways (gotta love choice!) that you store your files on. Be that your system files, or your personal files... they are all stored on a file system. This part you seem to understand. But what if you partition your hard drive to have more than one partition (think Apple Pie cut up into pieces), or add an additional hard drive (perhaps a USB stick?). For the sake of argument, they all have file systems on them as well. When you look at the files on your computer, you're seeing a visual representation of data on your partition's file system. Each file name corresponds to what is called an inode, which is where your data, behind the scenes, really lives. A hard link lets you have multiple "file names" (for lack of a better description) that point to the same inode. This only works if those hard links are on the same file system. A symbolic link instead points to the "file name", which then is linked to the inode holding your data. Forgive my crude artwork but hopefully this explains better. image.jpg image2.jpg \ / [your data] here, image.jpg, and image2.jpg both point directly to your data. They are both hardlinks. However... image.jpg <----------- image2.jpg \ [your data] In this (crude) example, image2.jpg doesn't point to your data, it points to the image.jpg... which is a link to your data. Symbolic links can work across file system boundaries (assuming that file system is attached and mounted, like your usb stick). However a hard link cannot. It knows nothing about what is on your other file system, or where your data there is stored. Hopefully this helps make better sense. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/290525",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175667/"
]
} |
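The limitation described above is easy to see on the command line (assuming, say, a USB stick mounted at /mnt/usb, i.e. a different filesystem):

ln image.jpg image2.jpg                      # hard link on the same filesystem: fine
ln image.jpg /mnt/usb/image2.jpg             # fails with EXDEV ("Invalid cross-device link")
ln -s "$PWD/image.jpg" /mnt/usb/image2.jpg   # a symlink works, since it only stores a path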
290,533 | I'm running old Debian machine:Distributor ID: DebianDescription: Debian GNU/Linux 5.0.2 (lenny)Release: 5.0.2Codename: lenny I open terminal and run Midnight Commander in it. Now I need to quit by pressing F10. But When I do this I'm getting terminal menu: How to get MC menu and not terminal one by pressing F10? | Go to Edit->Keyboard Shortcuts And uncheck "Enable the menu shortcut key" to turn it off . Reference link : here . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/290533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37277/"
]
} |
290,553 | This is the behavior of pgrep on my FreeBSD script: luis@Balanceador:~/Temporal$ cat test.sh#!/usr/bin/env bashpgrep -fl "test.sh"luis@Balanceador:~/Temporal$ ./test.shluis@Balanceador:~/Temporal$ No output. This is: the script itself is not detected as running. Just like the command-line behavior of pgrep . This is fine for me. And this is the Linux (Ubuntu) case: luis@Terminus:~/Temporal$ cat test.sh#!/usr/bin/env bashpgrep -fl "test.sh"luis@Terminus:~/Temporal$ ./test.sh4514 bashluis@Terminus:~/Temporal$ As can be seen, the test.sh script itself seems to be detected as a running process. I am developing a shell (Bash) script that must be able to stop and wait if another instance of it (same name) is detected, and must work on both Linux and FreeBSD, so I would like to homogeneize the pgrep command for this. Is there any way to make both of them behave the same ? Answers for any derivative like grep or ps accepted. Tested : The -x (exact) switch from the Man Page , but does not work with parameters on the scripts (or I don't understand how to make it work). Parameters are not important to detect (they could be whatever) in this case; I just need to detect the main '.sh' script. Further Notes : Please, don't answer like "Use file locking". I actually want to know about the pgrep tool. | FreeBSD's pgrep excludes ancestors of the pgrep process, including the script instance that ran it. Linux's pgrep doesn't have a corresponding feature. You can exclude the script instance manually. The process id to exclude is in $$ . You do need to be a bit careful to also avoid any subshell: the most straightforward method pgrep … | grep -v "^$$ " might list the right-hand side of the pipe if pgrep happens to reach it in the process list before grep is invoked. instances=$(pgrep -fl "test.sh")if [[ "$instances" =~ (.*)$'\n'"$$ "[^$'\n']*$'\n'(.*) ]]; then instances=${BASH_REMATCH[1]}$'\n'${BASH_REMATCH[2]}elif [[ "$instances" =~ (^|.*$'\n')"$$ "[^$'\n']*($|$'\n'.*) ]]; then instances=${BASH_REMATCH[1]}${BASH_REMATCH[2]}fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290553",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57439/"
]
} |
290,578 | Conceptually I have an easy task... I have [loosely] structured data in a file: Testing: debug, default CXXFLAGS<100's of additional output lines>Testing: release, default CXXFLAGS<100's of additional output lines>... I try to summarize it in a log file: echo "Configurations tested:" | tee -a "$TEST_RESULTS"echo $($GREP 'Testing: ' "$TEST_RESULTS" | $SED 's/Testing: / * /g') | tee -a "$TEST_RESULTS" Instead of: Configurations tested: * debug, default CXXFLAGS * release, default CXXFLAGS I get: Configurations tested:1 3way.cpp 3way.h CMakeLists.txt CMakeLists.txt.diff Doxyfile Filelist.txt GNUmakefileGNUmakefile-cross Install.txt License.txt Readme.txt TestData TestVectors adhoc.cpp.protoadler32.cpp adler32.h aes.h algebra.cpp algebra.h ... I think I am wreaking havoc on the file buffer $TEST_RESULTS because its being read from in the grep , and written to with the tee . When I attempt to put the result of $GREP 'Testing: ' "$TEST_RESULTS" | $SED 's/Testing: / * /g' in a shell variable, I loose the line endings which results in one big concatenation: * debug, default CXXFLAGS * release, default CXXFLAGS ... <30 additional configs> How do I read from and append to a file at the same time while preserving the end-of-lines? I've made some progress with: ESCAPED=$($GREP 'Testing: ' "$TEST_RESULTS" | $AWK -F ": " '{print " -" $2 "$"}')echo $ESCAPED | tr $ '\n' | tee -a "$TEST_RESULTS" However, it can't use * as a bullet point, and it seems to drop leading space: Configurations tested:-debug, default CXXFLAGS -release, default CXXFLAGS I'm not using sed because swapping-in a new line is an absolute pain across platforms. Platforms include BSD, Cygwin, Linux, OS X, Solaris. | FreeBSD's pgrep excludes ancestors of the pgrep process, including the script instance that ran it. Linux's pgrep doesn't have a corresponding feature. You can exclude the script instance manually. The process id to exclude is in $$ . You do need to be a bit careful to also avoid any subshell: the most straightforward method pgrep … | grep -v "^$$ " might list the right-hand side of the pipe if pgrep happens to reach it in the process list before grep is invoked. instances=$(pgrep -fl "test.sh")if [[ "$instances" =~ (.*)$'\n'"$$ "[^$'\n']*$'\n'(.*) ]]; then instances=${BASH_REMATCH[1]}$'\n'${BASH_REMATCH[2]}elif [[ "$instances" =~ (^|.*$'\n')"$$ "[^$'\n']*($|$'\n'.*) ]]; then instances=${BASH_REMATCH[1]}${BASH_REMATCH[2]}fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
290,667 | My Network Interface card r/w rate is 1000MB/S, but when I scp one file it shows the copy speed is 120MB/S. such as: scp test.gz localhost:/data/test.gz | Looks like you're confusing M b /s (mega bit per second) with M B /s (mega byte per second). 1000 M b /s becomes a theoretical 125 M B /s, and 120 M B /s looks like good performance (since you don't give more information, I take that it is a standard desktop PC with SATA hard disks). Besides, I don't really think you can reach 1 G B /s (which would mean 8 G b /s) without special equipment (10 Gb ethernet, a high-end NAS or a SAN etc...). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166355/"
]
} |
290,696 | I'm looking for a way to visually separate stdout and stderr, so that they don't interleave and so that they can be easily identified. Ideally, stdout and stderr would have separate areas on the screen in which they are displayed, e.g. in different columns. For example, output which would have looked like this: ~$ some commandsome useful output infoERROR: an errormore outputERROR: has occurredanother message~$ would instead look something like this: ~$ some command |some useful output info |more output | ERROR: an erroranother message | ERROR: has occurred~$ | | You could use GNU screen 's vertical split feature: #! /bin/bash -tmpdir=$(mktemp -d) || exittrap 'rm -rf "$tmpdir"' EXIT INT TERM HUPFIFO=$tmpdir/FIFOmkfifo "$FIFO" || exitconf=$tmpdir/confcat > "$conf" << 'EOF' || exitsplit -vfocusscreen -t stderr sh -c 'tty > "$FIFO"; read done < "$FIFO"'focusscreen -t stdout sh -c 'read tty < "$FIFO"; eval "$CMD" 2> "$tty"; echo "[Command exited with status $?, press enter to exit]"; read prompt; echo done > "$FIFO"'EOFCMD="$*"export FIFO CMDscreen -mc "$conf" To use for instance as: that-script 'ls / /not-here' The idea is that it runs screen with a temporary conf file that starts two screen windows in a vertical split layout. In the first one, we run your command with the stderr connected to the second one. We use a named pipe for the second window to communicate its tty device to the first one, and also for the first one to tell the second one when the command is done. The other advantage compared to pipe-based approaches is that the command's stdout and stderr are still connected to tty devices, so it doesn't affect the buffering. Both panes can also been scrolled up and down independently (using screen 's copy mode). If you run a shell like bash interactively with that script, you'll notice the prompt will be displayed on the second window, while the shell will read what you type in the first window as those shells output their prompt on stderr. In the case of bash , the echo of what you type will also appear on the second window as that echo is output by the shell (readline in the case of bash ) on stderr as well. With some other shells like ksh93 , it will show on the first window ( echo output by the terminal device driver, not the shell), unless you put the shell in emacs or vi mode with set -o emacs or set -o vi . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290696",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170239/"
]
} |
290,710 | I'm writing a bash script that kills a bunch of processes selected by certain criteria and then quits. The only problem is that those criteria apply to the script and its parent processes ( bash , sshd ) itself, so in order to avoid killing the script before it has done its work, I first get the matching processes with ps , then filter out the script and its parents with sed and finally kill the remaining processes with kill . Now I'm wondering whether I could simplify this to a single pkill call, but obviously that can only work if pkill is guaranteed to kill itself and its parent processes last if they occur in the list of processes to kill. Is there such a guarantee implemented into pkill ? | pkill never kills itself, just like pgrep never lists itself; pkill signals every process matching the criteria except itself, and then exits. pkill does kill its parent(s) if they match the criteria, but if the parent is a shell it will normally ignore the signal unless you use an uncatchable one (usually only -9 aka -[SIG]KILL). If the match includes your sshd , that will indeed kill your session, and with it your shell and (most?) other processes, which is usually undesired. You might want to use pgrep to find the processes, perhaps with -l or -lf , and do additional checks before killing them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68864/"
]
} |
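Following the pkill answer's advice to pgrep first and do additional checks before killing, one possible sketch (the pattern myworker is just a placeholder):

for pid in $(pgrep -f 'myworker'); do
    [ "$pid" = "$$" ] && continue      # skip the script itself
    [ "$pid" = "$PPID" ] && continue   # skip its parent (e.g. the invoking shell)
    kill "$pid"
done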
290,716 | In Persian numerals, ۰۱۲۳۴۵۶۷۸۹ is equivalent to 0123456789 in European digits. How can I convert a Persian number (in UTF-8) to ASCII? For example, I want ۲۱ to become 21 . | Since it's a fixed set of numbers, you can do it by hand: $ echo ۲۱ | LC_ALL=en_US.UTF-8 sed -e 'y/۰۱۲۳۴۵۶۷۸۹/0123456789/'21 (or using tr , although GNU tr does not handle multi-byte characters yet) Setting your locale to en_US.utf8 (or better, to the locale that the character set belongs to) is required for sed to recognize your character set. With perl : $ echo "۲۱" | perl -CS -MUnicode::UCD=num -MUnicode::Normalize -lne 'print num(NFKD($_))'21 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/290716",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66727/"
]
} |
290,744 | I have images, I need to delete some files with the same size. But no need to remove all such images, but only the next in the queue (in alphabetical order): 1.png # 23,5 Kb2.png # 24,6 Kb4.png # 24,6 Kb > remove8.png # 24,6 Kb > remove16.png # 23,5 Kb | If you're on Linux or otherwise have access to GNU tools, you can do this: last=-1; find . -type f -name '*.png' -printf '%f\0' | sort -nz | while read -d '' i; do s=$(stat -c '%s' "$i"); [[ $s = $last ]] && rm "$i"; last=$s; done Explanation last=-1 : set the variable $last to -1 . find . -type f -name '*.png' -printf '%f\0' : find all files in the current directory whose name ends in .png and print their name followed by the NULL character . sort -nz : sort \0 -separated input ( -z ) numerically ( -n ). This results in a sorted list of file names. while read -d '' i; do : read the list of file names. The -d '' sets the field delimiter to \0 which is needed to process NULL-separated data correctly. s=$(stat -c '%s' "$i"); : the variable $s now holds the size of the current file ( $i ). [[ $s = $last ]] && rm "$i"; : if the current file's size is the same as the last file's size, delete the file. last=$s : set $last to the current file's size. Now, if the next file has the same size, the previous step will delete it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290744",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152089/"
]
} |
290,780 | I have seven (or eight and so on) files with same number of lines. file1 1.0011.0021.0031.004 file2 2.0012.0022.0032.004 file3 3.0013.0023.0033.004 etc. Desired output: 1.001;2.001;3.001;4.001;5.001;6.001;7.0011.002;2.002;3.002;4.002;5.002;6.002;7.0021.003;2.003;3.003;4.003;5.003;6.003;7.0031.004;2.004;3.004;4.004;5.004;6.004;7.004 How to do it with short script in awk? | As steeldriver said, the reasonable way to do this is with paste : $ paste -d';' file*1.001;2.001;3.001;4.001;5.001;6.001;7.001;8.0011.002;2.002;3.002;4.002;5.002;6.002;7.002;8.0021.003;2.003;3.003;4.003;5.003;6.003;7.003;8.0031.004;2.004;3.004;4.004;5.004;6.004;7.004;8.004 But, if you must use awk : $ awk '{a[FNR]=a[FNR](FNR==NR?"":";")$0} END{for (i=1;i<=FNR;i++) print a[i]}' file*1.001;2.001;3.001;4.001;5.001;6.001;7.001;8.0011.002;2.002;3.002;4.002;5.002;6.002;7.002;8.0021.003;2.003;3.003;4.003;5.003;6.003;7.003;8.0031.004;2.004;3.004;4.004;5.004;6.004;7.004;8.004 The awk script keeps all the data in memory. If the files are large, this could be a problem. But, for this task, paste is better and simpler anyway. How it works In this script a is an array with a[i] being the output for line i . As we read through each of the subsequent files, we append the new information for line i to the end of a[i] . After we have finished reading the files, we print out the values in a . In more detail: a[FNR]=a[FNR](FNR==NR?"":";")$0 FNR is the line number of the current file we are reading and $0 are the contents of that line. This code adds $0 on to the end of a[FNR] . Except if we are still reading the first file, we put in a semicolon before $0 . This is done using the complex looking ternary statement: (FNR==NR?"":";") . This is really just a if-then-else command. If we are reading the first file, that is if FNR==NR , then it returns an empty string "" . If not, it returns a semicolon, ; . END{for (i=1;i<=FNR;i++) print a[i]} After we have finished reading all the files, this prints out the data that we have accumulated in array a . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290780",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/151133/"
]
} |
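A quick way to try the accepted paste invocation end to end; the file names and values below are made up to mirror the question:

    for n in 1 2 3; do
        printf '%s\n' "$n.001" "$n.002" "$n.003" > "file$n"   # build sample inputs
    done
    paste -d';' file1 file2 file3
    # 1.001;2.001;3.001
    # 1.002;2.002;3.002
    # 1.003;2.003;3.003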
290,783 | I'm doing a restore backups script, and when I try to join files which are splitted, cat prints that error: cat: fullbackup_mrbsNuevo_15_6_2016.tar.gz.*: No such file or directory The line that runs cat command is this one: cat $type"_"$NAME_DIR"_"$d"_"$2"_"$1".tar.gz."\* > $type"_"$NAME_DIR"_"$d"_"$2"_"$1".tar.gz" I have checked if the files exist and they are in the directory. Also I have tried running the command in the shell and works fine. I don't know what is bad in my script. When I run the ls command in the directory the files are: cisco@Paquito1:/tmp/backup$ ls -lahtotal 7,1Mdrwxr-xr-x 2 cisco cisco 4,0K 2016-06-18 12:01 .drwxrwxrwt 5 root root 4,0K 2016-06-18 10:10 ..-rw-r--r-- 1 cisco cisco 5,0M 2016-06-18 11:52 fullbackup_mrbsNuevo_15_6_2016.tar.gz.aa-rw-r--r-- 1 cisco cisco 2,1M 2016-06-18 11:52 fullbackup_mrbsNuevo_15_6_2016.tar.gz.ab And before executing the cat command I move to the /tmp/backup directory. Thanks in advance. | As steeldriver said, the reasonable way to do this is with paste : $ paste -d';' file*1.001;2.001;3.001;4.001;5.001;6.001;7.001;8.0011.002;2.002;3.002;4.002;5.002;6.002;7.002;8.0021.003;2.003;3.003;4.003;5.003;6.003;7.003;8.0031.004;2.004;3.004;4.004;5.004;6.004;7.004;8.004 But, if you must use awk : $ awk '{a[FNR]=a[FNR](FNR==NR?"":";")$0} END{for (i=1;i<=FNR;i++) print a[i]}' file*1.001;2.001;3.001;4.001;5.001;6.001;7.001;8.0011.002;2.002;3.002;4.002;5.002;6.002;7.002;8.0021.003;2.003;3.003;4.003;5.003;6.003;7.003;8.0031.004;2.004;3.004;4.004;5.004;6.004;7.004;8.004 The awk script keeps all the data in memory. If the files are large, this could be a problem. But, for this task, paste is better and simpler anyway. How it works In this script a is an array with a[i] being the output for line i . As we read through each of the subsequent files, we append the new information for line i to the end of a[i] . After we have finished reading the files, we print out the values in a . In more detail: a[FNR]=a[FNR](FNR==NR?"":";")$0 FNR is the line number of the current file we are reading and $0 are the contents of that line. This code adds $0 on to the end of a[FNR] . Except if we are still reading the first file, we put in a semicolon before $0 . This is done using the complex looking ternary statement: (FNR==NR?"":";") . This is really just a if-then-else command. If we are reading the first file, that is if FNR==NR , then it returns an empty string "" . If not, it returns a semicolon, ; . END{for (i=1;i<=FNR;i++) print a[i]} After we have finished reading all the files, this prints out the data that we have accumulated in array a . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175824/"
]
} |
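For the question above about joining the split archive parts, one plausible culprit is the backslash-escaped asterisk ( ".tar.gz."\* ), which the shell passes through literally instead of expanding. A sketch of the same line with the variables quoted and the * left bare (variable names taken from the question's script):

    cd /tmp/backup || exit 1
    cat "${type}_${NAME_DIR}_${d}_${2}_${1}.tar.gz."* \
        > "${type}_${NAME_DIR}_${d}_${2}_${1}.tar.gz"    # the glob must stay unquoted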
290,891 | Is there a simple way to make this silently do nothing if /my-directory does not exist? find /my-directory -type f -mtime +14 -print0 | xargs -r0 rm Versions: find: GNU findutils 4.5.10 bash 4.2.53 | You can throw away error reporting from find with 2>/dev/null , or you can avoid running the command at all: test -d /my-directory && find /my-directory -type f -mtime +14 -print0 | xargs -r0 rm As a slight optimisation and clearer code, some versions of find - including yours - can perform the rm for you directly: test -d /my-directory && find /my-directory -type f -mtime +14 -delete | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22068/"
]
} |
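If adding the test is awkward (for instance in a one-line cron job), the error-suppression route mentioned in the answer looks like this; the || true is only needed where a non-zero exit status would matter, such as under set -e:

    find /my-directory -type f -mtime +14 -delete 2>/dev/null || true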
290,938 | So I'm trying to give a VM a static IP address, this case has been particularly stubborn. The VM is running on a ESXi cluster with its own public IP range. I had it (sorta) working with an IPv4 address, except it would be reassigned every boot, now after fiddling with nmcli I can't get any IPv4 address assigned to it. The interface is ens32 and I've changed ipv4.addresses to XXX.XXX.120.44/24 (want it to have address 120.44 ), gateway to XXX.XXX.120.1 and set it to manual. Does anyone have any insights to why this isn't working? all the online guides are for the older network service not NetworkManager. | Try: # nmcli con add con-name "static-ens32" ifname ens32 type ethernet ip4 xxx.xxx.120.44/24 gw4 xxx.xxx.120.1# nmcli con mod "static-ens32" ipv4.dns "xxx.xxx.120.1,8.8.8.8"# nmcli con up "static-ens32" iface ens32 Next, find the other connections and delete them. For example: # nmcli con showNAME UUID TYPE DEVICEens32 ff9804db5-........ 802-3-ethernet --static-ens32 a4b59cb4a-........ 802-3-ethernet ens32# nmcli con del ens32 On the next reboot, you should pick up the static-ens32 connection, as it is the only one available. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/290938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175924/"
]
} |
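A sketch for checking the result afterwards with a reasonably recent NetworkManager; the field names are standard nmcli output fields, and the interface name ens32 is taken from the question:

    nmcli -f GENERAL.STATE,IP4.ADDRESS,IP4.GATEWAY,IP4.DNS device show ens32
    ip -4 addr show dev ens32    # the kernel's view of the address actually applied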
290,960 | I have a command that needs to take multiple input files, which are originally from one directory and have certain file name patterns. For example: In directory /home/mydir/ I have files: A.dat B.dat C.dat readme.doc I would like to learn how to pass all files ending in ".dat" to this command, which then should look like: command A.dat B.dat C.dat > /home/outputdir/output.dat I could do it in python e.g. by storing the file names in a list, but how should I do it in shell please? Thanks a lot. | A standard UNIX shell will do something called globbing ; this uses special characters to mean, for example, one character ( ? ) or any number of characters ( * ). To use your example, you could run (where the initial $ represents your command prompt and not something you'd type): $ command /home/mydir/*.dat > /home/outputdir/output.dat Your shell will expand that to: $ command /home/mydir/A.dat /home/mydir/B.dat /home/mydir/C.dat before actually calling the command. The * says "take any and all filenames in /home/mydir that end with .dat". For some variations of the command, given the same input files: # all of the sample input files have a single letter before the ".dat" $ command /home/mydir/?.dat > /home/outputdir/output.dat # the square brackets say "any (one) of these characters" $ command /home/mydir/[ABC].dat > /home/outputdir/output.dat | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/290960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174521/"
]
} |
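One caveat worth adding: if nothing matches, most shells pass the pattern through unexpanded, so the program would literally receive /home/mydir/*.dat as an argument. A bash-specific sketch that guards against that, using "command" as the same placeholder name as in the question:

    shopt -s nullglob                  # unmatched globs expand to nothing (bash)
    files=(/home/mydir/*.dat)
    if [ "${#files[@]}" -gt 0 ]; then
        command "${files[@]}" > /home/outputdir/output.dat
    fi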
290,974 | I am trying to extract numbers out of some text. Currently I am using the following: echo "2.5 test. test -50.8" | tr '\n' ' ' | sed -e 's/[^0-9.]/ /g' -e 's/^ *//g' -e 's/ *$//g' | tr -s ' ' This would give me 2.5, "." and 50.8. How should I modify the first sed so it would detect float numbers, both positive and negative? | grep works well for this: $ echo "2.5 test. test -50.8" | grep -Eo '[+-]?[0-9]+([.][0-9]+)?'2.5-50.8 How it works -E Use extended regex. -o Return only the matches, not the context [+-]?[0-9]+([.][0-9]+)?+ Match numbers which are identified as: [+-]? An optional leading sign [0-9]+ One or more numbers ([.][0-9]+)? An optional period followed by one or more numbers. Getting the output on one line $ echo "2.5 test. test -50.8" | grep -Eo '[+-]?[0-9]+([.][0-9]+)?' | tr '\n' ' '; echo ""2.5 -50.8 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/290974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175611/"
]
} |
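The accepted pattern needs digits on both sides of the decimal point; if values such as .5 or exponent forms like -1e-3 can occur, a slightly wider expression (still GNU grep -E) is, as a sketch:

    echo "2.5 test .5 -1e-3" |
        grep -Eo '[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)([eE][+-]?[0-9]+)?'
    # 2.5
    # .5
    # -1e-3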
290,987 | I've a master bind9 DNS server and 2 slave servers running on IPv4 (Debian Jessie), using /etc/bind/named.conf : listen-on-v6 { none; }; When I try to connect from different server(s) each connection takes at least 5 seconds (I'm using Joseph's timing info for debugging): $ curl -w "@curl-format.txt" -o /dev/null -s https://example.com time_namelookup: 5.512 time_connect: 5.512 time_appconnect: 5.529 time_pretransfer: 5.529 time_redirect: 0.000 time_starttransfer: 5.531 ---------- time_total: 5.531 According to curl , lookup takes most of the time, however standard nslookup is very fast: $ time nslookup example.com > /dev/null 2>&1real 0m0.018suser 0m0.016ssys 0m0.000s After forcing curl to use IPv4, it gets much better: $ curl -4 -w "@curl-format.txt" -o /dev/null -s https://example.com time_namelookup: 0.004 time_connect: 0.005 time_appconnect: 0.020 time_pretransfer: 0.020 time_redirect: 0.000 time_starttransfer: 0.022 ---------- time_total: 0.022 I've disabled IPv6 on the host: echo 1 > /proc/sys/net/ipv6/conf/eth0/disable_ipv6 though the problem persists. I've tried running strace to see what's the reason of timeouts: write(2, "*", 1*) = 1write(2, " ", 1 ) = 1write(2, "Hostname was NOT found in DNS ca"..., 36Hostname was NOT found in DNS cache) = 36socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 4close(4) = 0mmap(NULL, 8392704, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f220bcf8000mprotect(0x7f220bcf8000, 4096, PROT_NONE) = 0clone(child_stack=0x7f220c4f7fb0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f220c4f89d0, tls=0x7f220c4f8700, child_tidptr=0x7f220c4f89d0) = 2004rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0rt_sigaction(SIGPIPE, NULL, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, 8) = 0rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0poll(0, 0, 4) = 0 (Timeout)rt_sigaction(SIGPIPE, NULL, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, 8) = 0rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0poll(0, 0, 8) = 0 (Timeout)rt_sigaction(SIGPIPE, NULL, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, 8) = 0rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0poll(0, 0, 16) = 0 (Timeout)rt_sigaction(SIGPIPE, NULL, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, 8) = 0rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0poll(0, 0, 32) = 0 (Timeout)rt_sigaction(SIGPIPE, NULL, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, 8) = 0rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0poll(0, 0, 64) = 0 (Timeout) It doesn't seem to be a firewall issues as nslookup (or curl -4 ) is using the same DNS servers. Any idea what could be wrong? 
Here's tcpdump from the host tcpdump -vvv -s 0 -l -n port 53 : tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes20:14:52.542526 IP (tos 0x0, ttl 64, id 35839, offset 0, flags [DF], proto UDP (17), length 63) 192.168.1.1.59163 > 192.168.1.2.53: [bad udp cksum 0xf9f3 -> 0x96c7!] 39535+ A? example.com. (35)20:14:52.542540 IP (tos 0x0, ttl 64, id 35840, offset 0, flags [DF], proto UDP (17), length 63) 192.168.1.1.59163 > 192.168.1.2.53: [bad udp cksum 0xf9f3 -> 0x6289!] 45997+ AAAA? example.com. (35)20:14:52.543281 IP (tos 0x0, ttl 61, id 63674, offset 0, flags [none], proto UDP (17), length 158) 192.168.1.2.53 > 192.168.1.1.59163: [udp sum ok] 45997* q: AAAA? example.com. 1/1/0 example.com. [1h] CNAME s01.example.com. ns: example.com. [10m] SOA ns01.example.com. ns51.domaincontrol.com. 2016062008 28800 7200 1209600 600 (130)20:14:57.547439 IP (tos 0x0, ttl 64, id 36868, offset 0, flags [DF], proto UDP (17), length 63) 192.168.1.1.59163 > 192.168.1.2.53: [bad udp cksum 0xf9f3 -> 0x96c7!] 39535+ A? example.com. (35)20:14:57.548188 IP (tos 0x0, ttl 61, id 64567, offset 0, flags [none], proto UDP (17), length 184) 192.168.1.2.53 > 192.168.1.1.59163: [udp sum ok] 39535* q: A? example.com. 2/2/2 example.com. [1h] CNAME s01.example.com., s01.example.com. [1h] A 136.243.154.168 ns: example.com. [30m] NS ns01.example.com., example.com. [30m] NS ns02.example.com. ar: ns01.example.com. [1h] A 136.243.154.168, ns02.example.com. [1h] A 192.168.1.2 (156)20:14:57.548250 IP (tos 0x0, ttl 64, id 36869, offset 0, flags [DF], proto UDP (17), length 63) 192.168.1.1.59163 > 192.168.1.2.53: [bad udp cksum 0xf9f3 -> 0x6289!] 45997+ AAAA? example.com. (35)20:14:57.548934 IP (tos 0x0, ttl 61, id 64568, offset 0, flags [none], proto UDP (17), length 158) 192.168.1.2.53 > 192.168.1.1.59163: [udp sum ok] 45997* q: AAAA? example.com. 1/1/0 example.com. [1h] CNAME s01.example.com. ns: example.com. [10m] SOA ns01.example.com. ns51.domaincontrol.com. 2016062008 28800 7200 1209600 600 (130) EDIT: In bind logs frequently appears this message: error sending response: host unreachable Though, each query is eventually answered (it just takes 5s). All machines are physical servers (it's not fault of NAT), it's more likely that packets are being blocked by a router. Here's quite likely related question: DNS lookups sometimes take 5 seconds . | Short answer: A workaround is forcing glibc to reuse a socket for look up of the AAAA and A records, by adding a line to /etc/resolv.conf : options single-request-reopen The real cause of this issue might be: malconfigured firewall or a router (e.g. a Juniper firewall configuration described here ) which causes dropping AAAA DNS packets bug in DNS server Long answer: Programs like curl or wget use glibc's function getaddrinfo() , which tries to be compatible with both IPv4 and IPv6 by looking up both DNS records in parallel. It doesn't return result until both records are received (there are several issues related to such behaviour ) - this explains the strace above. When IPv4 is forced, like curl -4 internally gethostbyname() which queries for A record only. From tcpdump we can see that: -> A? two requests are send at the beginning -> AAAA? (requesting IPv6 address) <- AAAA reply -> A? requesting again IPv4 address <- A got reply -> AAAA? requesting IPv6 again <- AAAA reply One A reply gets dropped for some reason, that's this error message: error sending response: host unreachable Yet it's unclear to me why there's a need for second AAAA query. 
To verify that you're having the same issue you can update timeout in /etc/resolv.conf : options timeout:3 First create a text file with custom time reporting config : cat >./curl-format.txt <<-EOF time_namelookup: %{time_namelookup}\n time_connect: %{time_connect}\n time_appconnect: %{time_appconnect}\n time_redirect: %{time_redirect}\n time_pretransfer: %{time_pretransfer}\ntime_starttransfer: %{time_starttransfer}\n ----------\ntime_total: %{time_total}\nEOF then send a request: $ curl -w "@curl-format.txt" -o /dev/null -s https://example.com time_namelookup: 3.511 time_connect: 3.511 time_appconnect: 3.528 time_pretransfer: 3.528 time_redirect: 0.000 time_starttransfer: 3.531 ---------- time_total: 3.531 There are two other related options in man resolv.conf : single-request (since glibc 2.10) sets RES_SNGLKUP in _res.options . By default, glibc performs IPv4 and IPv6 lookups in parallel since version 2.9. Some appliance DNS servers cannot handle these queries properly and make the requests time out. This option disables the behavior and makes glibc perform the IPv6 and IPv4 requests sequentially (at the cost of some slowdown of the resolving process). single-request-reopen (since glibc 2.9) The resolver uses the same socket for the A and AAAA requests. Some hardware mistakenly sends back only one reply. When that happens the client system will sit and wait for the second reply. Turning this option on changes this behavior so that if two requests from the same port are not handled correctly it will close the socket and open a new one before sending the second request. Related issues: DNS lookups sometimes take 5 seconds Delay associated with AAAA request | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/290987",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34514/"
]
} |
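A compact before/after check, passing the format to curl inline instead of via a file; the workaround line is shown commented out because resolvconf or NetworkManager may overwrite /etc/resolv.conf on some systems:

    curl -o /dev/null -s \
         -w 'lookup %{time_namelookup}s  total %{time_total}s\n' https://example.com

    # workaround, appended to /etc/resolv.conf:
    # options single-request-reopen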
291,062 | The SQLite documentation for its command line client indicates that its possible to filter output of SQLite queries through UNIX utilities: The default output mode is "list". [...] List mode is especially useful when you are going to send the output of a query to another program (such as AWK) for additional processing. example of output produced at an SQLite command prompt sqlite> select * from todos;1|finish reading getting started section of the vim manual2|finish app feature then, if I try adding a pipe to the command, I just get a new prompt sqlite> select * from todos; | grep vim...> Is it possible to send output to a unix utility using a pipe from the SQLite command line, or is it only possible to filter SQLite output if you are actually writing a C application and using the SQLite C library? | You can't pipe output from an interactive SQLite session because it's not a shell. | doesn't do in SQL what it does on a command line. What you probably need to do is something akin to sqlite3 /path/to/mydata.sqlite "select * from todos" | grep vim , which will execute the SQL, and grep the output as you appear to be trying to do. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/291062",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
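Two non-interactive sketches of the same idea; the database path is the one from the answer, and the column name in the second line is only an assumption based on the sample rows in the question:

    # filter outside sqlite3, exactly as in the answer:
    sqlite3 /path/to/mydata.sqlite 'SELECT * FROM todos;' | grep vim

    # or keep the filtering in SQL so no pipe is needed at all:
    sqlite3 /path/to/mydata.sqlite "SELECT * FROM todos WHERE task LIKE '%vim%';"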
291,065 | I try to duplicate a video file x times from the command line by using a for loop. I've tried it like this, but it does not work: for i in {1..100}; do cp test.ogg echo "test$1.ogg"; done | Your shell code has two issues: The echo should not be there. The variable $i ("dollar i") is mistyped as $1 ("dollar one") in the destination file name. To make a copy of a file in the same directory as the file itself, use cp thefile thecopy If you use more than two arguments, e.g. cp thefile theotherthing thecopy then it is assumed that you'd like to copy thefile and theotherthing into the directory called thecopy . In your case with cp test.ogg echo "test$1.ogg" , it specifically looks for a file called test.ogg and one named echo to copy to the directory test$1.ogg . The $1 will most likely expand to an empty string. This is why, when you delete the echo from the command, you get "test.ogg and test.ogg are the same files"; the command being executed is essentially cp test.ogg test.ogg This is probably a mistyping. In the end, you want something like this: for i in {1..100}; do cp test.ogg "test$i.ogg"; done Or, as an alternative: i=0; while (( i++ < 100 )); do cp test.ogg "test$i.ogg"; done Or, using tee : tee test{1..100}.ogg <test.ogg >/dev/null Note: This would most likely work for 100 copies, but for thousands of copies it may generate an "argument list too long" error. In that case, revert to using a loop. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/291065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124191/"
]
} |
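If the copies should sort cleanly later, zero-padded names help; a sketch relying on GNU seq, whose -w flag pads the numbers to equal width:

    for i in $(seq -w 1 100); do
        cp test.ogg "test$i.ogg"      # produces test001.ogg ... test100.ogg
    done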
291,225 | I want to know how many instances of a pattern are found by grep while looking recursively through a directory structure. It seems I should be able to pipe the output of grep through something which would count the lines. | I was able to put the answer together with help from this question . The "wc" program counts newlines, words, and bytes. The "-l" option specifies that the number of lines is desired. For my application, the following worked nicely to count the number of instances of "somePattern": $ grep -r "somePattern" filename | wc -l | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/291225",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8884/"
]
} |
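Worth noting that wc -l here counts matching lines, not matches: a line containing the pattern twice is counted once. If individual occurrences are wanted, a sketch (GNU grep; the directory path is assumed):

    grep -ro "somePattern" /path/to/dir | wc -l    # every occurrence, even several per line
    grep -rc "somePattern" /path/to/dir            # per-file matching-line counts (file:count)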
291,260 | I recently started using Ranger as my default file manager, and I'm really enjoying it. Right now, I've managed to change rifle.conf so that when I play audio or video from Ranger, mpv opens in a new xterm window and the media starts to play. However, if possible, I would like Ranger to open the gnome-terminal instead of xterm. In /.config/ranger/rifle.conf , it says that using the t flag will run the program in a new terminal: If $TERMCMD is not defined, rifle will attempt to extract it from $TERM I tried setting $TERMCMD in both my .profile and .bashrc files, but even though echo $TERMCMD would print "gnome-terminal", Ranger would still open xterm. I also messed with setting $TERM to "gnome-terminal", but that was messy and I decided to leave it alone. Any suggestions? Thanks! | As of 2017, the source-code ( runner.py ) did this: term = os.environ.get('TERMCMD', os.environ.get('TERM')) if term not in get_executables(): term = 'x-terminal-emulator' if term not in get_executables(): term = 'xterm' if isinstance(action, str): action = term + ' -e ' + action else: action = [term, '-e'] + action so you should be able to put any xterm-compatible program name in TERMCMD . However, note the use of -e (gnome-terminal doesn't match xterm's behavior). If you are using Debian/Ubuntu/etc, the Debian packagers have attempted to provide a wrapper to hide this difference in the x-terminal-emulator feature. If that applies to you, you could set TERMCMD to x-terminal-emulator . Followup - while the design of the TERMCMD feature has not changed appreciably since mid-2016, the location within the source has changed: Refactor and improve the TERMCMD handling moved it to ranger/ext/get_executables.py That is implemented in get_term : def get_term(): """Get the user terminal executable name. Either $TERMCMD, $TERM, "x-terminal-emulator" or "xterm", in this order. """ command = environ.get('TERMCMD', environ.get('TERM')) if shlex.split(command)[0] not in get_executables(): command = 'x-terminal-emulator' if command not in get_executables(): command = 'xterm' return command which uses x-terminal-emulator as before. There is a related use of TERMCMD in rifle.py , used for executing commands rather than (as asked in the question) for opening a terminal. Either way, the key to using ranger is x-terminal-emulator , since GNOME Terminal's developers do not document their command-line interface, while Debian developers have provided this workaround. Quoting from Bug 701691 – -e accepts only one term; all other terminal emulators accept more than one term (which the developer refused to fix, marking it "not a bug"): Christian Persch 2013-06-06 16:02:54 UTC There are no docs for the gnome-terminal command line options. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/291260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169049/"
]
} |
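A sketch of wiring this up on a Debian-family system; the key points are that ranger must actually inherit the variable, and that whatever TERMCMD names has to accept an xterm-style -e option:

    # one-off test, no config changes needed:
    TERMCMD=x-terminal-emulator ranger

    # make it permanent, e.g. in ~/.profile, then log in again or source the file:
    export TERMCMD=x-terminal-emulator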