source_id (int64, 1 to 74.7M) | question (string, 0 to 40.2k chars) | response (string, 0 to 111k chars) | metadata (dict) |
---|---|---|---|
279,828 | I'm trying to run vmware on my kali linux 2.0 box, because I want a lab to practice my skills on, but vmware won't launch it's giving my an error message saying "C header files matching your running kernel were not found", I ran a command to fix it but then the command spit out some error message root@kali:~# sudo apt-get install -y linux-headers-$(uname -r) Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package linux-headers-4.3.0-kali1-amd64 E: Couldn't find any package by glob 'linux-headers-4.3.0-kali1-amd64' E: Couldn't find any package by regex 'linux-headers-4.3.0-kali1-amd64' root@kali:~# | pepperflashplugin-nonfree has its own key stash in /usr/lib/pepperflashplugin-nonfree/pubkey-google.txt . Until the package is updated with the new key, you can add the key locally by executing gpg --keyserver pgp.mit.edu --recv-keys 1397BC53640DB551gpg --export --armor 1397BC53640DB551 | sudo sh -c 'cat >> /usr/lib/pepperflashplugin-nonfree/pubkey-google.txt' It is important that the new key is appended to the file (">>"), the old key is still needed. After this you can install the pepperflashplugin with sudo update-pepperflashplugin-nonfree --install The file will be overwritten when the package is updated, so you might have to do this again after an update if the maintainer hasn't added the new key (in this case you will get the same error message again when the new version is being installed). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/279828",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168075/"
]
} |
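The gpg commands quoted in the response above run together on one line; separated out, the sequence it describes is:

```sh
# fetch the new Google signing key, then append it to the plugin's key stash
gpg --keyserver pgp.mit.edu --recv-keys 1397BC53640DB551
gpg --export --armor 1397BC53640DB551 | sudo sh -c 'cat >> /usr/lib/pepperflashplugin-nonfree/pubkey-google.txt'

# reinstall the plugin once the key has been appended
sudo update-pepperflashplugin-nonfree --install
```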
279,845 | Linux distro: SLES 11 SP4 I have on hand SLES 11 ISO's: SP1, SP2, & SP3 I need to locate and install older programs located on older ISO's. For example: I have gcc 4.7 on SP4, but I need to install gcc 4.3.4 on SP1. When I insert the disc(s) into the DVD-ROM, I open YaST > Software Management > Search: gcc > The only thing I see is the SP4 catologue list, but I'm fairly certain that the gcc versions are located on those ISO's. What am I doing wrong and how can I install older programs from previous SP packages? | pepperflashplugin-nonfree has its own key stash in /usr/lib/pepperflashplugin-nonfree/pubkey-google.txt . Until the package is updated with the new key, you can add the key locally by executing gpg --keyserver pgp.mit.edu --recv-keys 1397BC53640DB551gpg --export --armor 1397BC53640DB551 | sudo sh -c 'cat >> /usr/lib/pepperflashplugin-nonfree/pubkey-google.txt' It is important that the new key is appended to the file (">>"), the old key is still needed. After this you can install the pepperflashplugin with sudo update-pepperflashplugin-nonfree --install The file will be overwritten when the package is updated, so you might have to do this again after an update if the maintainer hasn't added the new key (in this case you will get the same error message again when the new version is being installed). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/279845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138367/"
]
} |
279,864 | I use an old server without dos2unix and I would like to convert files containing Windows-style end-of-line (EOL) to Unix-style EOL. I am unfortunately not the admin so I can't install dos2unix . The tr method seems to be the only one that works. cp script _p4 && tr -d '\r' < _p4 > script && rm _p4 Are there any easier methods to do this? | If you have GNU sed you can do this: sed -i 's/\x0D$//' script Where "x0D" is the ASCII code for \r. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/279864",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87257/"
]
} |
279,884 | Subtitle files come in a variety of formats, from .srt to .sub to .ass and so on and so forth. Is there a way to tell mpv to search for subtitle files alongwith the media files and if it does to start playing the file automatically. Currently I have to do something like this which can be pretty long depending on the filename - [$] mpv --list-options | grep sub-file (null) requires an argument --sub-file String list (default: ) [file] Look forward to answers. Update 1 - A typical movie which has .srt (or subscript) [$] mpv Winter.Sleep.\(Kis.Uykusu\).2014.720p.BrRip.2CH.x265.HEVC.Megablast.mkv (null) requires an argumentPlaying: Winter.Sleep.(Kis.Uykusu).2014.720p.BrRip.2CH.x265.HEVC.Megablast.mkv (+) Video --vid=1 (*) (hevc) (+) Audio --aid=1 (aac) (+) Subs --sid=1 'Winter.Sleep.(Kis.Uykusu).2014.720p.BrRip.2CH.x265.HEVC.Megablast.srt' (subrip) (external)[vo/opengl] Could not create EGL context![sub] Using subtitle charset: UTF-8-BROKENAO: [alsa] 48000Hz stereo 2ch floatVO: [opengl] 1280x536 yuv420pAV: 00:02:14 / 03:16:45 (1%) A-V: 0.000 The most interesting line is this :- (+) Subs --sid=1 'Winter.Sleep.(Kis.Uykusu).2014.720p.BrRip.2CH.x265.HEVC.Megablast.srt' (subrip) (external) Now if the file was as .ass or .sub with the same filename, it wouldn't work. I have tried it in many media files which have those extensions and each time mpv loads the video and audio and the protocols but not the external subtitle files. Update 2 - The .ass script part is listed as a bug at mpv's bts - https://github.com/mpv-player/mpv/issues/2846 Update 3 - Have been trying to debug with help of upstream, filed https://github.com/mpv-player/mpv/issues/3091 for that. It seems though that it's not mpv which is responsible but ffmpeg (and libavformat) which is supposed to decode the subtitles. Hence have added ffmpeg to it too. | As seen in man mpv : --sub-auto=<no|exact|fuzzy|all>, --no-sub-auto Load additional subtitle files matching the video filename. The parameter specifies how external subtitle files are matched. exact is enabled by default. no Don't automatically load external subtitle files. exact Load the media filename with subtitle file extension (default). fuzzy Load all subs containing media filename. all Load all subs in the current and --sub-paths directories. exact would seem like the appropriate choice, but since it's the default and it doesn't load files like [video name minus extension].srt , fuzzy is the next best bet and it works on my system. So just echo "sub-auto=fuzzy" >> ~/.config/mpv/mpv.conf . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/279884",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
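A minimal sketch of applying that answer: set the option permanently in mpv.conf, or pass it for a single run (the movie filename here is only an example).

```sh
# permanent: append to the user configuration, as the answer suggests
mkdir -p ~/.config/mpv
echo "sub-auto=fuzzy" >> ~/.config/mpv/mpv.conf

# one-off: pass the option on the command line for a single file
mpv --sub-auto=fuzzy Winter.Sleep.2014.720p.mkv
```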
279,895 | The -depth primary to find causes it to perform a depth-first search. However, the default sequence is not a breadth-first search. The default sequence could be informally described as a "depth-first traversal that handles nodes when they are first encountered rather than doing so during backtracking." I have an actual need for breadth first search. How can I make find behave in this way? For illustration, with the following setup: $ mkdir -p alpha/{bravo,charlie,delta}$ touch alpha/charlie/{alpha,beta,gamma,phi} find has the following default behavior: $ find alphaalphaalpha/charliealpha/charlie/alphaalpha/charlie/phialpha/charlie/betaalpha/charlie/gammaalpha/deltaalpha/bravo and with -depth , it performs as follows: $ find alpha -depthalpha/charlie/alphaalpha/charlie/phialpha/charlie/betaalpha/charlie/gammaalpha/charliealpha/deltaalpha/bravoalpha However, what I want is the following (fictitious) option: $ find alpha -bfsalphaalpha/charliealpha/deltaalpha/bravoalpha/charlie/alphaalpha/charlie/phialpha/charlie/betaalpha/charlie/gamma In other words I need find to process/report on all files/dirs at a given depth before proceeding further. How can I do this? | # cat ./bfind #!/bin/bashi=0while results=$(find "$@" -mindepth $i -maxdepth $i) && [[ -n $results ]]; do echo "$results" ((i++))done This works by increasing the depth of find and repeating, I think it may repeat results, but could be filtered easily | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/279895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
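The bfind script in the response has lost its line breaks; reconstructed from the same text, it reads:

```sh
#!/bin/bash
# print everything at depth i, then go one level deeper, until a level comes back empty
i=0
while results=$(find "$@" -mindepth $i -maxdepth $i) && [[ -n $results ]]; do
    echo "$results"
    ((i++))
done
```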
279,900 | I have a rm aliased: alias rm='rm -i' How can I override the -i option when I want to remove a large number of files and I don't want to confirm each deletion? | # cat ./bfind #!/bin/bashi=0while results=$(find "$@" -mindepth $i -maxdepth $i) && [[ -n $results ]]; do echo "$results" ((i++))done This works by increasing the depth of find and repeating, I think it may repeat results, but could be filtered easily | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/279900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154468/"
]
} |
279,952 | Ok so I have gotten fairly far into the configuration successfully, the two nodes have authenticated each other and all is well, but when I try to add the virtual_ip it never starts. What i've used so far is not really relevant, but my write up (wip) is here, i just don't want to make this post look larger then it needs to be. To create the virtual interface I have the used the following : pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.218 cidr_netmask=32 op monitor interval=30s I only have one nic and its config looks like this : [root@node1 network-scripts]# cat ifcfg-eno16777984TYPE="Ethernet"BOOTPROTO=noneDEFROUTE="yes"IPV4_FAILURE_FATAL="no"IPV6INIT=noIPV6_AUTOCONF="yes"IPV6_DEFROUTE="yes"IPV6_PEERDNS="yes"IPV6_PEERROUTES="yes"IPV6_FAILURE_FATAL="no"NAME=eth0UUID="bf0b3de8-f607-42f3-9b00-f22f59292c94"DEVICE="eno16777984"ONBOOT="yes"IPADDR=192.168.1.216PREFIX=32GATEWAY=192.168.1.1DNS1=192.168.1.149 The Error: (found via "pcs status") * virtual_ip_start_0 on node1 'unknown error' (1): call=12, status=complete, exitreason='Unable to find nic or netmask.',last-rc-change='Fri Apr 29 02:03:57 2016', queued=1ms, exec=24ms I don't think its an IPTables problem, as I have it disabled currently along with all other Firewalls. I don't have SELinux disabled. I suspect I need another network config, but im a bit lost what to make the device= and really I am just moving from Ubuntu, so the layout is a bit new, but I love NMTUI! This looked promising regarding the interface, but I couldn't get it to work and I tried A LOT. Any help is appreciated. A few other reads I went thru https://www.centos.org/forums/viewtopic.php?t=50183 https://ihazem.wordpress.com/2011/03/15/adding-virtual-interface-to-centosredhat/ This is the guide I am following : http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/index.html#_add_a_resource As always, if you need more info, I am happy to provide it, thanks in advance! | The guide doesn’t have you add nic=eno### to this command, but it failed for me if I didn’t use it.You can find your device number via the following command cat /etc/sysconfig/network-scripts/ifcfg-e* | grep DEVICE Mine is eno16777984 so this is my example command. pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.218 cidr_netmask=32 nic=eno16777984 op monitor interval=30s Make sure it started using the following command: pcs cluster start --all && sudo pcs status resources | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/279952",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130767/"
]
} |
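The commands from that answer, collected in order (the NIC name eno16777984 is the answerer's own device; substitute whatever the first command reports on your system):

```sh
# find the device name of the NIC
cat /etc/sysconfig/network-scripts/ifcfg-e* | grep DEVICE

# create the resource, passing the NIC explicitly
pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.218 \
    cidr_netmask=32 nic=eno16777984 op monitor interval=30s

# confirm the resource started
pcs cluster start --all && sudo pcs status resources
```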
279,985 | Pressing PrnScr "Print Screen" on the keyboard results in a screenshot being silently saved under /home/%user%/Pictures/ How can I change this location? | Open dconf-editor (note that you may need to install it first: sudo apt install dconf-editor ) Navigate to org.gnome.gnome-screenshot : org gnome gnome-screenshot Then enter a value for auto-save-directory in the format file:///path/to/directory/ e.g file:///home/yourusername/Pictures/screenshots/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/279985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27953/"
]
} |
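If you prefer the terminal to dconf-editor, the same key can presumably be set with gsettings (assuming the gnome-screenshot schema is installed; the directory is an example):

```sh
mkdir -p ~/Pictures/screenshots
gsettings set org.gnome.gnome-screenshot auto-save-directory "file:///home/yourusername/Pictures/screenshots/"
```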
279,987 | Last night I did an upgrade on my fedora 24 alpha install and when booting first time today I just ended up with a black screen. Then I tried booting into rescue mode, which left me with a shell after showing the fedora loading screen. I tried reverting the last update listed with dnf historylist undo id# , but that failed because it doesn't connect to the network. In the shell, it shows this line before prompting for root pwd: dracut-pre-udev[302]: Symbol 'svc_auth_none' has different size in shared object, consider relinking Any ideas how I could revert the last update? EDIT: I have looked through the log provided by journalctl -xb and there seem to be a lot of systemd errors related to mounting all kinds of drives, so that is probably the reason why it won't boot. Funny thing is that my hardware setup has not changed one bit, drives are all working as supposed to. I forgot to add that I tried booting into the two previous alpha kernel versions to no avail (though both used to work previous to yesterdays update). EDIT2: Ouput of journalctl -xb -p3 : -- Logs begin at Mit 2016-01-20 15:01:49 CET, end at Fre 2016-04-29 17:06:53 CEST. --Apr 29 19:06:48 localhost systemd[1]: Device dev-disk-by\x2dpartlabel-Microsoft\x5cx20reserved\x5cx20partition.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda/sda1 and /sys/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb1Apr 29 19:06:48 localhost systemd[1]: Device dev-disk-by\x2dpartlabel-EFI\x5cx20System\x5cx20Partition.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda/sda2 and /sys/devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/block/sdd/sdd1Apr 29 19:06:48 localhost systemd[1]: Device dev-disk-by\x2dpartlabel-Basic\x5cx20data\x5cx20partition.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb2 and /sys/devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/block/sdd/sdd4Apr 29 19:06:50 localhost rpcbind[314]: cannot open file = /tmp/rpcbind.xdr for writingApr 29 19:06:50 localhost rpcbind[314]: cannot save any registrationApr 29 19:06:50 localhost rpcbind[314]: cannot open file = /tmp/portmap.xdr for writingApr 29 19:06:50 localhost rpcbind[314]: cannot save any registrationApr 29 17:06:50 linux.fritz.box systemd[1]: Failed to mount NFSD configuration filesystem.-- Subject: Unit proc-fs-nfsd.mount has failed-- Defined-By: systemd-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel-- -- Unit proc-fs-nfsd.mount has failed.-- -- The result is failed.Apr 29 17:06:50 linux.fritz.box systemd[1]: dev-disk-by\x2dpartlabel-Microsoft\x5cx20reserved\x5cx20partition.device: Dev dev-disk-by\x2dpartlabel-Microsoft\x5cx20reserved\x5cx20partition.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb1 and /sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda/sda1Apr 29 17:06:51 linux.fritz.box systemd[1]: dev-disk-by\x2dpartlabel-EFI\x5cx20System\x5cx20Partition.device: Dev dev-disk-by\x2dpartlabel-EFI\x5cx20System\x5cx20Partition.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda/sda2 and /sys/devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/block/sdd/sdd1Apr 29 17:06:51 
linux.fritz.box systemd[1]: dev-disk-by\x2dpartlabel-Basic\x5cx20data\x5cx20partition.device: Dev dev-disk-by\x2dpartlabel-Basic\x5cx20data\x5cx20partition.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb2 and /sys/devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/block/sdd/sdd4Apr 29 17:06:51 linux.fritz.box systemd[1]: Failed to mount /boot/efi.-- Subject: Unit boot-efi.mount has failed-- Defined-By: systemd-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel-- -- Unit boot-efi.mount has failed.-- -- The result is failed.Apr 29 17:06:53 linux.fritz.box systemd[1]: Failed to mount /mnt/20DF1A322D28FF74.-- Subject: Unit mnt-20DF1A322D28FF74.mount has failed-- Defined-By: systemd-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel-- -- Unit mnt-20DF1A322D28FF74.mount has failed.-- -- The result is failed. EDIT3: Contents of /etc/systemd/system/default.target # This file is part of systemd.## systemd is free software; you can redistribute it and/or modify it# under the terms of the GNU Lesser General Public License as published by# the Free Software Foundation; either version 2.1 of the License, or# (at your option) any later version.[Unit]Description=Graphical InterfaceDocumentation=man:systemd.special(7)Requires=multi-user.targetWants=display-manager.serviceConflicts=rescue.service rescue.targetAfter=multi-user.target rescue.service rescue.target display-manager.serviceAllowIsolate=yes | This worked for me. Add the following to your kernel parameter. selinux=1 enforcing=0 This sets the SELinux enforcement mode from strict to permissive . This is a temporary solution until I figure out what is going on, or until an update fixes the problem. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/279987",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114295/"
]
} |
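The answer does not say where to put the parameter; on Fedora, one common way to make a kernel parameter persistent is via /etc/default/grub, roughly as sketched below (the grub.cfg path differs between BIOS and EFI installs):

```sh
# /etc/default/grub -- append to the existing kernel command line, e.g.:
#   GRUB_CMDLINE_LINUX="... selinux=1 enforcing=0"

# then regenerate the GRUB configuration (BIOS layout shown)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```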
279,991 | I wanted to move 1000 files in a directory to another dir. I can do it if I'm in the original dir but if I try this mv $(ls /home/jeremy/source|tail -1000) /home/jeremy/dest from elsewhere, the path gets stripped and so I would need to append the path somehow. I find the $ convenient and wanted to avoid xargs as it seems tricky. | This worked for me. Add the following to your kernel parameter. selinux=1 enforcing=0 This sets the SELinux enforcement mode from strict to permissive . This is a temporary solution until I figure out what is going on, or until an update fixes the problem. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/279991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118522/"
]
} |
280,015 | This process keeps hogging my bandwidth: What does this process do? Is it safe to kill it? Is is safe toremove the package as a whole( to prevent it from starting up everagain) Or should I just prevent it from automatically running in the background again? I am running Fedora 23. | PackageKit is being run by GNOME Software. It's not doing automatic updates, but it is downloading them so they're ready. There is work on making this be smarter, including not doing it over bandwidth-constrained connections, but it hasn't landed yet. In the meantime, you can disable this by running: dconf write /org/gnome/software/download-updates false or by using the GUI dconf editor and unchecking the download-updates box under org > gnome > software : Note that this is per-user. For changing the default for everyone, see the GNOME docs . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/280015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166084/"
]
} |
280,025 | I saw many place introduce screen to run background job stably even log out. They use screen -dmS name According to screen -h , this option means -dmS name Start as daemon: Screen session in detached mode. What is daemon? I don't understand. I found that if I simply type screen , I can enter automatically into a screen. After I run some command, and press Ctrl+a d , and then log off. The job is still running fine. So is this simple approach OK? Do I really need -dmS to make background job stable? Let me try to give a summary: Anything run in screen is safe to logging out (but you should detach the screen, not quit screen when you log out), no matter what the option you have set to screen. -dmS is just an option convienient for submitting jobs in background noniteractively. That is screen -dmS nameOfScreen command | You would only use -dm if you want to run a command in a screen session and not enter it interactively -S is just to give the session a usable name so you can reconnect to it again easily later If you want to use it interactively and don't want to give it a human readable name, you can omit all of those arguments safely. For example, if you just want to start up screen to run the command, say, /path/to/longTime and you don't want to watch it run you could do it either as screen -dmS longSession /path/to/longTime or you could do screen -S longSession$ /path/to/longTime ctrl a d Both would accomplish the same thing, but one is both easier to script and a bit less typing. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280025",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31374/"
]
} |
280,066 | When I work on my server remotely, sometimes my SSH connections get dropped due to network issues. When I re-connect to my server, the dropped sessions remain open. I can see them when I run w . I'm aware that I can kill them using their PID, but I would like to auto-kill dropped sessions, if that's possible. How can I achieve that? | Enable one of the SSH keepalive messages, for example by enabling TCPKeepAlive or ClientAliveInterval in the server's sshd config. Similarly, in the client config you can use TCPKeepAlive and ServerAliveInterval . TCPKeepAlive used to just be KeepAlive , if you have an old version of OpenSSH. TCP keepalives are a feature that is part of TCP, and operates outside the encrypted tunnel built by SSH. So someone could, for example, spoof them to pretend the connection is still open when it isn't. ClientAlive/ServerAlive operates inside the encrypted tunnel, so it can't be spoofed (but I believe it's a new option, and of course costs more CPU time). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280066",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167722/"
]
} |
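A sketch of the settings that answer refers to; the interval values are examples, not recommendations from the answer.

```sh
# /etc/ssh/sshd_config (server side) -- drop clients that stop responding
ClientAliveInterval 60
ClientAliveCountMax 3

# ~/.ssh/config (client side) -- have the client send its own keepalives
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
```

After editing the server file, reload sshd for the change to take effect.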
280,067 | I have a bash script that I want to remove files from various directories. Frequently, they won't be there because they weren't generated and that's fine. Is there a way to get the script not to report that error, but if rm has some other output to report that? Alternately, is there a better command to use to remove the file that'll be less noisy? | Use the -f option. It will silently ignore nonexistent files. From man rm : -f, --force ignore nonexistent files and arguments, never prompt [The "never prompt" part means that (a) -f overrides any previously specified -i or -I option, and (b) write-protected files will be deleted without asking.] Example Without -f , rm will complain about missing files: $ rm nonesuchrm: cannot remove ‘nonesuch’: No such file or directory With -f , it will be silent: $ rm -f nonesuch$ | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/280067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17808/"
]
} |
280,105 | Is there a way to monitor temperature or reads/writes of and NVMe drive (in this case Intel 750). hdparm , udisksctl , smartctl , and hddtemp all seem to lack this capability, google searches have been fruitless. For the curious, this is the only difficulty I've faced running Fedora 23 (Workstation) using NVMe for the system drive. | Using nvme-cli, I can get temperature from a Samsung 950 Pro with this command: nvme smart-log /dev/nvme0 | grep "^temperature" You can get other informations too: nvme smart-log /dev/nvme0Smart Log for NVME device:nvme0 namespace-id:ffffffffcritical_warning : 0temperature : 45 Cavailable_spare : 100%available_spare_threshold : 10%percentage_used : 0%data_units_read : 3,020,387data_units_written : 2,330,810host_read_commands : 26,960,077host_write_commands : 15,668,236controller_busy_time : 65power_cycles : 98power_on_hours : 281unsafe_shutdowns : 68media_errors : 0num_err_log_entries : 63Warning Temperature Time : 0Critical Composite Temperature Time : 0 Note: using kernel 4.6.4 For users access: /etc/sudoers # For users group%users ALL = NOPASSWD: nvme smart-log /dev/nvme0 | grep "^temperature"# For allALL ALL = NOPASSWD: nvme smart-log /dev/nvme0 | grep "^temperature" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/280105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119389/"
]
} |
280,166 | I try to replace a character, 2 lines after matching pattern. For this, I'm using this code: sed '/some_pattern/{N;N;s/word1/word2/}' /etc/filesystems > /etc/filesystems.tmp && mv -f /etc/filesystems.tmp /etc/filesystems I tested this command and confirmed it's working fine on Linux. However, when I use it in AIX, I receive an error message like: sed: Function /some_pattern/{N;N;s/word1/word2/} cannot be parsed. Any idea? | AIX sed needs each command on a separate line.See man page and try sed '/some_pattern/{ N N s/word1/word2/}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168924/"
]
} |
280,168 | How to make properly secure ssh account with empty password for running trusted binary? I'd like to make a kind of "virtual ssh Kiosk" for random users, while limiting "demo app" behind ssh "pseudo-user". By "secure", I mean, "secure for server running demo app for random visitors". Basically to serve application behind ssh account as service, in similar manner like https serves websites. (Let's assume for purpose of this question, that we trust /bin/cat or /usr/bin/cat - depending on server's system, please check yours with which echo ) While working on https://goo.gl/TjhrWd , I've encountered problem with making password empty. PAM rejects it. how is it configured Here is configuration I use and works when password is set for user cat - it is also described in https://goo.gl/TjhrWd : # below configured on Ubuntu Server 14.04 LTSaddgroup catonlyCHROOT=/var/chroot/cat# now let's make pseudo-shell binary, executing your ForceCommand (check source)# you can use bash as default shell instead, I prefer sth less bloated.cd /tmpwget 'https://gist.githubusercontent.com/gwpl/abcbc74c2bf377945a49097237edfb9b/raw/1993e8acc4bd66329959b1a658fcce4296d2a80c/only_exec_command.c'gcc only_exec_command.c -static -o only_exec_commandmkdir -p "$CHROOT"/{bin,lib,lib64,dev/pts,home/cat}chown root:root /var/chroot "$CHROOT"# dependig on distro it might be /usr/bin/cat -> check with `which cat`useradd -d /home/cat -s /bin/only_exec_command -M -N -g catonly catpasswd -d cat# Let's make chrootcp /tmp/only_exec_command "$CHROOT"/bin/cp /bin/cat "$CHROOT"/bin/mknod -m 666 "$CHROOT"/dev/null c 1 3ldd /bin/cat # tells us which libraries to copycp /lib/x86_64-linux-gnu/libc.so.6 "$CHROOT"/libcp /lib64/ld-linux-x86-64.so.2 "$CHROOT"/lib64chown cat:catonly "$CHROOT"/home/catchown root:root /var/chroot/cat /var/chroot /var$ $EDITOR /etc/ssh/sshd_config # add:Match user cat ChrootDirectory /var/chroot/cat X11Forwarding no AllowTcpForwarding no # dependig on distro it might be /usr/bin/cat -> check with `which cat` ForceCommand /bin/cat PasswordAuthentication yes PermitEmptyPasswords yes symptoms suggest it's PAM But trying to ssh , results in asking for password, and providing empty results in deny: ssh [email protected]@1.2.3.4's password:Permission denied, please try again. On server side running in debugging mode, there is nothing interesting in logs, let me quote server side part, during login, after entering empty password: /usr/sbin/sshd -ddd -p 1234(...)debug1: userauth-request for user echo service ssh-connection method password [preauth]debug1: attempt 2 failures 1 [preauth]debug2: input_userauth_request: try method password [preauth]debug3: mm_auth_password entering [preauth]debug3: mm_request_send entering: type 12 [preauth]debug3: mm_auth_password: waiting for MONITOR_ANS_AUTHPASSWORD [preauth]debug3: mm_request_receive_expect entering: type 13 [preauth]debug3: mm_request_receive entering [preauth]debug3: mm_request_receive enteringdebug3: monitor_read: checking request 12debug3: PAM: sshpam_passwd_conv called with 1 messagesdebug1: PAM: password authentication failed for echo: Authentication failuredebug3: mm_answer_authpassword: sending result 0debug3: mm_request_send entering: type 13Failed password for echo from 192.168.1.1 port 43816 ssh2debug3: mm_auth_password: user not authenticated [preauth]debug3: userauth_finish: failure partial=0 next methods="publickey,password" [preauth] | You need to tell PAM also that you want to allow empty passwords. There is some outdated tutorial describing that. 
But in short: sudo sed -i 's/nullok_secure/nullok/' /etc/pam.d/common-auth should do the job. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280168",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9689/"
]
} |
280,217 | Let's say I've got two repositories aye and bee and I want to get rid of bee , in such a way that the linear history of bee/master is "replayed" in a new subdirectory (several levels deep) of aye . I just want the files and the commit messages, I don't care about commit IDs. I do want a sensible history, so git subtree add --prefix=subdirectory and git read-tree --prefix=subdirectory/ are not what I'm looking for. Both repositories are private, so there's no risk of rewriting history for someone else. However, bee does have a submodule cee . | First, rewrite bee 's history to move all files into the subdirectory : cd /path/to/beegit filter-branch --force --prune-empty --tree-filter 'dir="my fancy/target directory"if [ ! -e "$dir" ]then mkdir --parents "${dir}" git ls-tree --name-only "$GIT_COMMIT" | xargs -I files mv files "$dir"fi' git log --stat should show every file appearing under my fancy/target directory . Now you can merge the history into aye with ease : cd /path/to/ayegit remote add -f bee /path/to/beegit checkout -b bee-master bee/mastergit rebase mastergit checkout mastergit rebase bee-master Recreate the submodule in aye : git submodule add git://my-submodule 'my fancy/target directory/my-submodule' Finally you can clean up aye : git rm 'my fancy/target directory/.gitmodules'git branch --delete --force bee-mastergit remote remove bee You may also have to fix any absolute paths in your repository (for example in .gitignore ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
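The tree-filter in the first command of that answer has lost its line breaks; reconstructed from the same text it reads:

```sh
cd /path/to/bee
git filter-branch --force --prune-empty --tree-filter '
dir="my fancy/target directory"
if [ ! -e "$dir" ]
then
    mkdir --parents "${dir}"
    git ls-tree --name-only "$GIT_COMMIT" | xargs -I files mv files "$dir"
fi'
```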
280,220 | I am trying to create a script to upload some files to a server via SFTP. I can do that manually by opening an interactive lftp-session and providing username and password there. For the script, I would like to not hardcode credentials in the script (for obvious reasons) not mention them on the commandline (I want the command in my .bash_history, but of course not the credentials) have lftp read the credentials from .netrc or something similar I cannot seem to get this working. My current workaround is a wrapper-script that parses the .netrc for the credentials and adds them to a lftp-script which I delete afterwards. This simulates the steps I perform manually, but seems like re-implementing existing functionality poorly. While this works, the question remains: Can lftp read .netrc for SFTP-connections? If so, are there special syntax-requirements if customs ports are part of the setup? I need to connect to sftp://username:[email protected]:12322 . | First, rewrite bee 's history to move all files into the subdirectory : cd /path/to/beegit filter-branch --force --prune-empty --tree-filter 'dir="my fancy/target directory"if [ ! -e "$dir" ]then mkdir --parents "${dir}" git ls-tree --name-only "$GIT_COMMIT" | xargs -I files mv files "$dir"fi' git log --stat should show every file appearing under my fancy/target directory . Now you can merge the history into aye with ease : cd /path/to/ayegit remote add -f bee /path/to/beegit checkout -b bee-master bee/mastergit rebase mastergit checkout mastergit rebase bee-master Recreate the submodule in aye : git submodule add git://my-submodule 'my fancy/target directory/my-submodule' Finally you can clean up aye : git rm 'my fancy/target directory/.gitmodules'git branch --delete --force bee-mastergit remote remove bee You may also have to fix any absolute paths in your repository (for example in .gitignore ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24580/"
]
} |
280,222 | I want to generate a QR code of my 4096-bit armored GPG private key. The key is so big, the program qrencode seems to fail because of its size. $ gpg --export-secret-keys --armor > ~/private.key$ ./qrencode -o test.png < ~/private.key Result: Failed to encode the input data: Numerical result out of range How can I make that happen? Are there alternative programs to qrencode which can handle a very big GPG key? I want to print it on paper as this security.SE question suggested. The comments of @geruetzel and @ cuonglm are addressing this version of my question . | Your error message already gives a hint as to what's wrong! Your one-liner is providing the actual file content as filename to the qrencode program. Hence the error message. Try qrencode -o test.png -t png < private.key . You should take a look at shell input-output redirection. For example, I/O Redirection . I see that you too have found your way to the developers GitHub repository of qrencode :) There is an explanation why a 4096-bit key cannot be encoded as a QR code: qrencode is encoding your private GPG key as 8 bit (binary|utf-8), because the key is not pure alphanumeric. It contains special character. the alphanumeric mode only supports those special character .(%*+-./:). So the maximum GPG key can only be 2953 char long. From https://github.com/fukuchi/libqrencode/issues/31 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280222",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117978/"
]
} |
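The answer itself stops at explaining the limit, but given the roughly 2953-character ceiling it mentions, one possible workaround for a 4096-bit key is to split the armored export into chunks below that limit and encode each chunk as its own QR code, along these lines:

```sh
gpg --export-secret-keys --armor > ~/private.key
split -b 2500 ~/private.key key-part-        # pieces small enough for 8-bit QR encoding
for part in key-part-*; do
    qrencode -o "$part.png" -t png < "$part"
done
```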
280,264 | I upgraded ubuntu 14.04 to ubuntu 16.04 and I have a problem with the internet connection. Specifically, DNS after update stopped working. For debugging purposes I set the only DNS 8.8.8.8 , but name resolution still doesn't work. The output of nmcli device show wlan1 | grep IP4 is: pc@pc:~$ nmcli device show wlan1 | grep IP4IP4.ADDRESS[1]: 192.168.1.3/24IP4.GATEWAY: 192.168.1.1IP4.ROUTE[1]: dst = 169.254.0.0/16, nh = 0.0.0.0, mt = 1000IP4.DNS[1]: 8.8.8.8The output from dig @8.8.8.8 google.com and dig google.com:dig @8.8.8.8 google.com; <<>> DiG 9.10.3-P4-Ubuntu <<>> @8.8.8.8 google.com; (1 server found);; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60075;; flags: qr rd ra; QUERY: 1, ANSWER: 12, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 512;; QUESTION SECTION:;google.com. IN A;; ANSWER SECTION:google.com. 27 IN A 62.75.23.245google.com. 27 IN A 62.75.23.230google.com. 27 IN A 62.75.23.216google.com. 27 IN A 62.75.23.238google.com. 27 IN A 62.75.23.224google.com. 27 IN A 62.75.23.223google.com. 27 IN A 62.75.23.237google.com. 27 IN A 62.75.23.210google.com. 27 IN A 62.75.23.217google.com. 27 IN A 62.75.23.231google.com. 27 IN A 62.75.23.244google.com. 27 IN A 62.75.23.251;; Query time: 89 msec;; SERVER: 8.8.8.8#53(8.8.8.8);; WHEN: Sat Apr 30 19:39:24 EEST 2016;; MSG SIZE rcvd: 231pc@pc:~$ dig google.com; <<>> DiG 9.10.3-P4-Ubuntu <<>> google.com;; global options: +cmd;; connection timed out; no servers could be reachedpc@pc:~$ route -nKernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 192.168.1.1 0.0.0.0 UG 600 0 0 wlan1169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 wlan1192.168.1.0 0.0.0.0 255.255.255.0 U 600 0 0 wlan1 | I solve the problem by using Amrish instructions at Ask Ubuntu Stack Exchange, i.e. by using the following code: sudo rm /etc/resolv.confsudo ln -s ../run/resolvconf/resolv.conf /etc/resolv.confsudo resolvconf -u | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168368/"
]
} |
280,333 | When I am in bash and press Esc , Shift + K , V , bash fires up $EDITOR with a filename similar to /tmp/bash-fc-186566385 . Why is that and what is its purpose? I probably need to mention that I am running bash with set -o vi . | This allows you to construct a command with full Vi editing. If you type some commands in and save exit :wq the commands will be run. CLARIFICATION: it allows you to construct the command in whatever editor you have set in $EDITOR and when you save and quit from it the contents will be run. (Clarified that it's not just Vi!) ALSO, as noted by RealSkeptic , the shift + K combination isn't required to bring up the editor. Simply esc , V will. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6479/"
]
} |
280,430 | Bash manual says: A parameter is set if it has been assigned a value. The null string is a valid value. ... If value is not given, the variable is assigned the null string. Is the null string the same as "" ? Are their lengths both zero? Can both be tested by conditional expressions -z or -n which tests if the length of a string is zero or nonzero? | Yes. Using the test from this answer : $ [ -n "${a+1}" ] && echo "defined" || echo "not defined"not defined$ a=""$ [ -n "${a+1}" ] && echo "defined" || echo "not defined"defined$ b=$ [ -n "${b+1}" ] && echo "defined" || echo "not defined"defined So setting the variable to "" is the same as setting it to empty value. Therefore, empty value and "" are the same. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
280,440 | I'm trying to install 64-bit Debian stable on a Lenovo Thinkpad. When I get to the installation step that installs the bootloader, I get this message: An installation step failed. You can try running the failing item again from the menu, or skip it and choose something else. The failing step is: Install the GRUB boot loader on a hard disk Going back to the menu and selecting LILO gives me the same error. The installation log says May 1 13:24:23 main-menu[188]: WARNING **: Configuring 'grub-installer' failed with error code 1May 1 13:24:23 main-menu[188]: WARNING **: Menu item 'grub-installer' failed.May 1 13:24:28 main-menu[188]: INFO: Menu item 'lilo-installer' selectedMay 1 13:24:28 main-menu[188]: WARNING **: Unable to set title for fdisk-udeb.May 1 13:24:28 main-menu[188]: WARNING **: Configuring 'lilo-installer' failed with error code 1May 1 13:24:28 main-menu[188]: WARNING **: Menu item 'lilo-installer' failed. I'm not using LVM or RAID. So far, I've tried Disabling UEFI boot and using legacy boot instead. The error still occurs, with both GRUB and LILO. Following the instructions on this question and running parted /dev/nvme01set 1 bios_grub on from TTY2, but I get an error that says parted not found . On my system /dev/nvme01 is the only hard disk Check for hardware errors. When I first purchased the system I ran all the available hardware tests, both from within the BIOS and from within Windows, and it passed all of them. I'm assuming that means the hardware isn't malfunctioning. Per this thread that had a similar error, albeit with LVM, I tried redoing the partitioning with a small /boot partition at the beginning, formatted with ext2 . Same error. Switching to TTY4 to look at the installation output, I also see the error chroot: can't execute 'grub-probe': No such file or directory Searching for information on that turns up this thread and this bug report related to GRUB, but a) those are old, and b) I've run through the installation up to this point over a dozen times now and I get the error every time, so it doesn't seem like a one-off thing. I've used Gparted to check that the hard disk is completely empty. Secure boot is disabled in the BIOS. I've run the installation using the full DVD and the netinstall CD; both are booted from USB, but the problem persists. I was able to successfully create an msdos partition table and three partitions (for / , /home , and swap ) on the drive in the previous installation step, so I don't know why GRUB suddenly can't write to the drive. How do I fix this and get Debian installed? As of now, the (brand new!) system is completely unusable because I can't get an OS on it. Could part of the problem be that Debian/parted recognizes the disk incorrectly? It says the disk is 512.1 GB, which is true in the sense that the specs say 512 GB and that's what is advertised, and it will let me allocate all 512 GB to various partitions. However, if I load it in Gparted, the actual disk space is closer to 476 GB, but I assumed that's just the usual 1024 vs 1000 stuff. (I also posted a version of this question on the Debian forums , so I'll update my question with anything important from that thread and vice versa.) | Here is what worked for me, using Debian jessie (stable). I basically took the instructions from this wiki post , and stripped out all the steps about dual-booting with Windows, since those didn't apply to my case. In the BIOS, set "UEFI only" boot. 
Using Gparted, create a FAT32 partition at the beginning of the disk with the boot and esp flags. (The Debian installer should be able to do this too, but since the installer incorrectly recognized the size of the disk, I prefer to use Gparted). In my case, the FAT32 partition is /dev/nvme0n1p1. During the installation, make sure you have a network connection configured (manually or automatically, doesn't matter). Otherwise, the next step will fail. At the installation stage where GRUB fails to install, open a shell and run the following commands: mount --bind /dev /target/devmount --bind /dev/pts /target/dev/ptsmount --bind /proc /target/procmount --bind /sys /target/syscp /etc/resolv.conf /target/etcchroot /target /bin/bashaptitude updateaptitude install grub-efi-amd64update-grubgrub-install --target=x86_64-efi /dev/nvme0n1 Exit the shell and select "Continue without installing a bootloader." You'll see a warning message that gives you boot commands to use; you can ignore this. Once the installation completes, boot into the system. Add "nvme" to /etc/initramfs-tools/modules, then run update-initramfs -u as root. Edit /etc/default/grub and add this line GRUB_CMDLINE_LINUX="intel_pstate=no_hwp" and add "nomodeset" to the GRUB_CMDLINE_LINUX_DEFAULT so it looks like this: GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset" Run update-grub . The last few commands (initramfs onward) are necessary to prevent disk not found errors the second time you try to boot into the new system. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49592/"
]
} |
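The shell commands in step 4 of that answer run together; one per line, the sequence is:

```sh
mount --bind /dev /target/dev
mount --bind /dev/pts /target/dev/pts
mount --bind /proc /target/proc
mount --bind /sys /target/sys
cp /etc/resolv.conf /target/etc
chroot /target /bin/bash
aptitude update
aptitude install grub-efi-amd64
update-grub
grub-install --target=x86_64-efi /dev/nvme0n1
```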
280,441 | So I have the following problem. I want to install BlackArch on an USB. I downloaded the Live ISO (64Bit) and used PowerISO to make a bootable USB. I am on Windows 8.1 so I hit Windows + C , went to Settings, holded Shift and hit restart. Then I booted from the USB. After that this error showed up: Failed to start loader.efi: Not found" Also the file vmlinuz.efi was not found. After that I used the Fedora Media USB Creator. After booting from the USB the loader was found but then the next error showed up: :: Mounting '/dev/disk/by-label/BLACKARCH_201601' to 'run/archiso/bootmnt'Waiting 30 seconds for device /dev/disk/by-label/ARCH_201212 ...ERROR: '/dev/disk/by-label/BLACKARCH_201601' device did not show up after 30 seconds...Falling back to interactive promptYou can try to fix the problem manually, log out when you are finishedsh: can't access tty; job control turned off So I found out that I have to rename the USB to BLACKARCH_201601 but there is a problem. The max length for renaming is 11 characters. So I googled and found out that I can change the label with an autorun.inf file. But that didn't work. Secure and Fastboot are disabled. Its an ASUS Laptop with Windows 8.1 and UEFI. Any ideas? EDIT: Okay so now I checked out the Files on the USB Stick and there was a .conf file which had a line Called label=BLACKARCH_201601 . I changed it to ARCH_EFI and renamed the USB ARCH_EFI . That worked! Now he could mount /dev/disk/by-label/ARCH_EFI . But now there is a new error: Failed to mount /dev/loop0 | Here is what worked for me, using Debian jessie (stable). I basically took the instructions from this wiki post , and stripped out all the steps about dual-booting with Windows, since those didn't apply to my case. In the BIOS, set "UEFI only" boot. Using Gparted, create a FAT32 partition at the beginning of the disk with the boot and esp flags. (The Debian installer should be able to do this too, but since the installer incorrectly recognized the size of the disk, I prefer to use Gparted). In my case, the FAT32 partition is /dev/nvme0n1p1. During the installation, make sure you have a network connection configured (manually or automatically, doesn't matter). Otherwise, the next step will fail. At the installation stage where GRUB fails to install, open a shell and run the following commands: mount --bind /dev /target/devmount --bind /dev/pts /target/dev/ptsmount --bind /proc /target/procmount --bind /sys /target/syscp /etc/resolv.conf /target/etcchroot /target /bin/bashaptitude updateaptitude install grub-efi-amd64update-grubgrub-install --target=x86_64-efi /dev/nvme0n1 Exit the shell and select "Continue without installing a bootloader." You'll see a warning message that gives you boot commands to use; you can ignore this. Once the installation completes, boot into the system. Add "nvme" to /etc/initramfs-tools/modules, then run update-initramfs -u as root. Edit /etc/default/grub and add this line GRUB_CMDLINE_LINUX="intel_pstate=no_hwp" and add "nomodeset" to the GRUB_CMDLINE_LINUX_DEFAULT so it looks like this: GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset" Run update-grub . The last few commands (initramfs onward) are necessary to prevent disk not found errors the second time you try to boot into the new system. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168478/"
]
} |
280,453 | Could you explain the following sentences from Bash manual about $_ , especially the parts in bold, maybe with some examples? At shell startup, set to the absolute pathname used to invoke the shell or shell script being executed as passed in the environment or argument list . Subsequently , expands to the last argument to the previous command, after expansion. Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command . When checking mail , this parameter holds the name of the mail file. | I agree it's not very clear. 1. At shell startup, if the _ variable was in the environment that bash received , then bash leaves it untouched. In particular, if that bash shell was invoked by another bash shell (though zsh , yash and some ksh implementations also doit), then that bash shell will have set the _ environmentvariable to the path of the command being executed (that's the 3rdpoint in your question). For instance, if bash is invoked tointerpret a script as a result of another bash shell interpreting: bash-script some args That bash will have passed _=/path/to/bash-scrip in theenvironment given to bash-script , and that's what the initialvalue of the $_ bash variable will be in the bash shell thatinterprets that script. $ env -i _=whatever bash -c 'echo "$_"'whatever Now, if the invoking application doesn't pass a _ environmentvariable , the invoked bash shell will initialise $_ to the argv[0] it receivesitself which could be bash , or /path/to/bash or /path/to/some-script or anything else (in the example above, thatwould be /bin/bash if the she-bang of the script was #! /bin/bash or /path/to/bash-script depending on thesystem ). So that text is misleading as it describes the behaviour of thecaller which bash has no control over. The application that invoked bash may very well not set $_ at all (in practice, only someshells and a few rare interactive applications do, execlp() doesn'tfor instance), or it could use it for something completely different(for instance ksh93 sets it to *pid*/path/to/command ). $ env bash -c 'echo "$_"'/usr/bin/env (env did not set it to /bin/bash, so the value we get is the one passed to env by my interactive shell)$ ksh93 -c 'bash -c "echo \$_"'*20042*/bin/bash 2. Subsequently The Subsequently is not very clear either. In practice, that's as soon as bash interprets a simple command in the current shell environment. In the case of an interactive shell , that will be on the first simple command interpreted from /etc/bash.bashrc for instance. For instance, at the prompt of an interactive shell: $ echo "$_" ] (the last arg of the last command from my ~/.bashrc) $ f() { echo test; } $ echo "$_" ] (the command-line before had no simple command, so we get the last argument of that previous echo commandline) $ (: test) $ echo "$_" ] (simple command, but in a sub-shell environment) $ : test $ echo "$_" test For a non-interactive shell , it would be the first command in $BASH_ENV or of the code fed to that shell if $BASH_ENV is notset. 3. When Bash executes a command The third point is something different and is hinted in the discussion above. bash , like a few other shells will pass a _ environment variable to commands it executes that contains the path that bash used as the first argument to the execve() system calls. $ env | grep '^_'_=/usr/bin/env 4. 
When checking mail The fourth point is described in more details in the description of the MAILPATH variable: 'MAILPATH' A colon-separated list of filenames which the shell periodically checks for new mail . Each list entry can specify the message that is printed when new mail arrives in the mail file by separating the filename from the message with a '?'. When used in the text of the message, '$_' expands to the name of the current mail file. Example: $ MAILCHECK=1 MAILPATH='/tmp/a?New mail in <$_>' bashbash$ echo test >> /tmp/aNew mail in </tmp/a> | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/280453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
280,457 | I have increased the font size by going to System Settings -> Font. This changed it for some applications like Konsole and File explorer. But this doesn't reflect in other applications like Firefox and Intellij. | The "System Settings" and "Font" dialog will (as a rule) affect only desktop applications which have been integrated with it. In your configuration, those are KDE applications. In X, most applications manage their fontsizes by themselves, and are not affected by these dialogs. Or, even if there is some nominal integration, it may not be maintained. For instance, the thread firefox ignores system font settings on ubuntu summarizes the cause by The problem is that (non-repository) Firefox builds in Ubuntu or any Linux distro, by default, uses it's own Cairo rather than system Cairo for rendering UI fonts. I forgot which bug report (Bugzilla, Launchpad) I found this in since it was some time ago but the fix is fairly easy. along with a fix for that application: In about.config set layout.css.dpi to 0 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19506/"
]
} |
280,463 | I have installed debian 8 jessie via usb stick on my thinkpad x250 (the laptop doesn't have any cd-dvd rom). Now when i want to use the synaptic or install some programms (especifically pure data) via: sudo apt-get install puredata i get this in terminal: Media change: please insert the disc labeled 'Debian GNU/Linux 8.4.0 Jessie - Official i386 DVD Binary-1 20160402-13:26'in the drive '/media/cdrom/' and press enter Why this happens and how am i supposed to insert a dvd when i haven't any dvd-rom? How can i fix this? | The installer considers that the USB stick you used for installation is a CD or DVD image; it left a reference to it in /etc/apt/sources.list as a source of packages for later. If you only wish to install packages from the Internet, you can remove the line mentioning cdrom: in /etc/apt/sources.list , then update the package cache with sudo apt-get update You could use the USB stick again but that would be somewhat more complicated. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
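A sketch of making that edit non-interactively, assuming the installer's entry starts with "deb cdrom:" as it normally does:

```sh
# comment out the cdrom source, then refresh the package lists
sudo sed -i '/^deb cdrom:/s/^/#/' /etc/apt/sources.list
sudo apt-get update
```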
280,500 | I have a complicated scenario, with most of the complicated parts working: SD card contains a Fedora 23 Raspberry Pi install on a LUKS/BTRFS filesystem. Fedora 23 x86-64 VM being used to manage the SD card install of Fedora for the RPi. I can mount the SD card in my VM just fine and chroot into it using a static build of QEMU ARM, which was for some reason really difficult to obtain. However, inside my chroot, I cannot access the network. When I've done similar things in Ubuntu/Debian, I can always access the network. I have disabled SELinux :( , and done a lot of bind-mounting to get things generally working, but I cannot get access to the network. I copied in /etc/sysconfig/network-scripts/ifcfg-enp0s3 , and tried just about everything else I could think of. My chroot setup looks something like this: cryptsetup luksOpen /dev/sdXN picryptmount -t btrfs /dev/mapper/picrypt /mntmount -t proc none /mnt/procmount -t sysfs none /mnt/sysmount --bind /dev /mnt/devmount --bind /dev/pts /mnt/dev/ptsmount --bind /run /mnt/runchroot /mnt/ I've noticed that a lot more filesystems are mounted from my host's perspective, but I'm assuming that they're carried over automatically and just not appearing in the chroot for some reason. Here are the mounts my host knows about: sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=2013260k,nr_inodes=503315,mode=755)securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime,seclabel)cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)configfs on /sys/kernel/config type configfs (rw,relatime)/dev/mapper/fedora-root on / type ext4 (rw,relatime,seclabel,data=ordered)selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)tmpfs on /tmp type tmpfs (rw,seclabel)debugfs on /sys/kernel/debug type debugfs (rw,relatime,seclabel)hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)nfsd on /proc/fs/nfsd type nfsd 
(rw,relatime)/dev/sda1 on /boot type ext4 (rw,relatime,seclabel,data=ordered)/dev/mapper/fedora-home on /home type ext4 (rw,relatime,seclabel,data=ordered)sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)tmpfs on /run/user/42 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=404748k,mode=700,uid=42,gid=42)tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=404748k,mode=700,uid=1000,gid=1000)gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)/dev/sr0 on /run/media/naftuli/VBOXADDITIONS_5.0.20_106931 type iso9660 (ro,nosuid,nodev,relatime,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2)/dev/sdXN on /run/media/naftuli/PI-BOOT type vfat (rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,showexec,utf8,flush,errors=remount-ro,uhelper=udisks2)tmpfs on /home/naftuli/tmp type tmpfs (rw,relatime,seclabel,mode=700,uid=1000,gid=1000) My chroot knows about a lot less than this: /dev/mapper/picrypt on / type btrfs (rw,relatime,seclabel,compress=lzo,space_cache,subvolid=5,subvol=/)none on /proc type proc (rw,relatime)devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=2013260k,nr_inodes=503315,mode=755)devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)sysfs on /sys type sysfs (ro,relatime,seclabel) In order to get a working chroot with networking support, do I really need to mount all of those /sys sub-filesystems into my chroot? | To set up networking for your chrooted session you need to copy the DNS configuration into the chroot environment : cp /etc/resolv.conf /mnt/etc/resolv.conf Or ln -s /etc/resolv.conf /mnt/etc/resolv.conf | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280500",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
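To tie the fix for 280,500 back to the mount sequence in the question, here is the whole thing in order — /dev/sdXN and /mnt are the question's own placeholders, and this assumes the host's /etc/resolv.conf is an ordinary file:

cryptsetup luksOpen /dev/sdXN picrypt
mount -t btrfs /dev/mapper/picrypt /mnt
mount -t proc none /mnt/proc
mount -t sysfs none /mnt/sys
mount --bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts
mount --bind /run /mnt/run
cp /etc/resolv.conf /mnt/etc/resolv.conf   # the DNS fix from the answer
chroot /mnt

If the copy fails because /mnt/etc/resolv.conf is a dangling symlink inside the image, remove that symlink first and repeat the copy.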
280,524 | I need to concatenate chunks from two files: if I needed to concatenate whole files, I could simply do cat file1 file2 > output But I need to skip the first 1MB from the first file, and I only want 10 MB from the second file. Sounds like a job for dd . dd if=file1 bs=1M count=99 skip=1 of=temp1; dd if=file2 bs=1M count=10 of=temp2; cat temp1 temp2 > final_output Is there a possibility to do this in one step, i.e. without the need to save the intermediate results? Can I use multiple input files in dd ? | dd can write to stdout too. ( dd if=file1 bs=1M count=99 skip=1; dd if=file2 bs=1M count=10 ) > final_output | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/280524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
280,528 | I am wondering if there is a Unix equivalent for the Windows environment variable PATHEXT . For those with no Windows background: Adding a file suffix to PATHEXT allows me to execute a script without typing that suffix in cmd.exe. For example, on my Windows computer, PATHEXT contains the suffix .pl and when I want to execute a Perl script in cmd.exe, I can simply type my-script and it gets executed. Yet, in order to execute the same script in bash, I need to write the full name: my-script.pl . Since I currently work on both Windows and Unix, I almost always fall into the trap of forgetting to type the suffix when going to a Unix box again. | Short answer: no. Longer answer: shell scripts require a full filename, but you can define aliases for your commands to refer to them by various names. For example: alias my-script=my-script.pl | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280528",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6479/"
]
} |
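Building on the alias suggestion in 280,528: if all the suffixed scripts live in one directory, the aliases can be generated in ~/.bashrc rather than written by hand. The ~/bin path and the .pl suffix here are only examples:

# in ~/.bashrc: one suffix-less alias per Perl script in ~/bin
for f in ~/bin/*.pl; do
    [ -e "$f" ] || continue            # skip if the glob matched nothing
    alias "$(basename "$f" .pl)"="$f"
done

The same loop works for other suffixes (.sh, .py, ...) by changing the glob pattern and the suffix passed to basename.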
280,532 | Is it possible to define a m4 macro (without arguments), which expands to 1 on first invocation, expands to 2 on second invocation, and so on? In other words, it should have internal memory storing the number of times it is invoked. Can this be done? | You can do that by having two macros, a counter holding the current value, and a count macro that expands to the value and redefines `counter'. For example, it could look like this define(`counter',`0')dnldefine(`count',`define(`counter',eval(counter+1))counter')dnl When the count macro is used, it firstly redefines counter to hold its next value (adding 1 to its present value), and then it uses that value. I'm not immediately sure how to do this with a single macro, and if that's an important aspect of your problem then this is not the answer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168567/"
]
} |
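For 280,532, the two definitions are meant to sit on separate lines (each trailing dnl just discards its own newline); written out, with a quick check of the behaviour — the shell heredoc is only there to make the example easy to paste into a terminal:

m4 <<'EOF'
define(`counter',`0')dnl
define(`count',`define(`counter',eval(counter+1))counter')dnl
count count count
EOF

This prints "1 2 3": each use of count redefines counter to its old value plus one and then expands to the new value.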
280,541 | I would like a drop down terminal which drops down when I move the mouse to the top (or bottom, left, right), similar to how the panel can be configured to auto-hide and only drop down or pop up if the mouse hovers to the border where it is. Currently I only found a method using shortcuts e.g. F12 to bind to e.g. xfce4-terminal --drop-down . I'm using XFCE 4.12, but I'm not particularly fixed to XFCE or xfce4-terminal , so if some other desktop or terminal supports this, it would also help. | You can do that by having two macros, a counter holding the current value, and a count macro that expands to the value and redefines `counter'. For example, it could look like this define(`counter',`0')dnldefine(`count',`define(`counter',eval(counter+1))counter')dnl When the count macro is used, it firstly redefines counter to hold its next value (adding 1 to its present value), and then it uses that value. I'm not immediately sure how to do this with a single macro, and if that's an important aspect of your problem then this is not the answer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111050/"
]
} |
280,548 | I'm getting error NSS error -12286 while downloading a file from HTTPS using curl . I can download the same file without issues using wget so I can exclude any firewall or blacklist issues. Already tried, with no luck, options -k and --cipher ecdhe_ecdsa_aes_128_gcm_sha_256 , that is the server preferred cipher according to Qualys SSL Labs Test Server tool here: https://www.ssllabs.com/ssltest/analyze.html?d=intribunale.net&latest Here is the cURL log: # curl -v https://www.intribunale.net/immobili* About to connect() to www.intribunale.net port 443 (#0)* Trying 104.27.150.214... connected* Connected to www.intribunale.net (104.27.150.214) port 443 (#0)* Initializing NSS with certpath: sql:/etc/pki/nssdb* CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none* NSS error -12286* Closing connection #0* SSL connect errorcurl: (35) SSL connect error My lib versions are: # curl -Vcurl 7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2Protocols: tftp ftp telnet dict ldap ldaps http file https ftps scp sftpFeatures: GSS-Negotiate IDN IPv6 Largefile NTLM SSL libz | The solution was upgrading to cURL 7.42 using a third party repository for CentOS 6 or building from sources | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74331/"
]
} |
280,564 | I recently purchased a Corsair k65 RGB keyboard. Of course it didn't work at first, but with an ckb-opensource driver I got everything working on my arch system. Everything went so well until I started to get errors whenever I boot my system: usb_submit_urb(ctrl) failed: -1 appears on my screen and the system freezes for 30sec. After that the keyboard works and I can login on my system.But what does the error mean? [ 11.238682] hid-generic 0003:1B1C:1B17.0002: usb_submit_urb(ctrl) failed: -1[ 11.239526] hid-generic 0003:1B1C:1B17.0002: timeout initializing reports[ 11.239959] input: Corsair Corsair K65 RGB Gaming Keyboard as /devices/pci0000:00/0000:00:1c.7/0000:07:00.0/usb5/5-1/5-1:1.1/0003:1B1C:1B17.0002/input/input6[ 11.291882] hid-generic 0003:1B1C:1B17.0002: input,hidraw4: USB HID v1.11 Keyboard [Corsair Corsair K65 RGB Gaming Keyboard] on usb-0000:07:00.0-1/input1[ 21.291319] hid-generic 0003:1B1C:1B17.0003: timeout initializing reports[ 21.291585] hid-generic 0003:1B1C:1B17.0003: hiddev0,hidraw5: USB HID v1.11 Device [Corsair Corsair K65 RGB Gaming Keyboard] on usb-0000:07:00.0-1/input2[ 31.290650] hid-generic 0003:1B1C:1B17.0004: timeout initializing reports[ 31.290905] hid-generic 0003:1B1C:1B17.0004: hiddev0,hidraw6: USB HID v1.11 Device [Corsair Corsair K65 RGB Gaming Keyboard] on usb-0000:07:00.0-1/input3 If I use lsusb I get: Bus 005 Device 002: ID 1b1c:1b17 Corsair I have heard about "usbhid quirks" is a possible workaround. But how do I use this ? Or is there any possible solution for this ? | Solution for all Corsair mechanical keyboards with usbhid quirks. sudo nano /etc/default/grub or any other editor you like to use instead of nano. you will see this line GRUB_CMDLINE_LINUX_DEFAULT="" make sure to put the usbhid.quircks between the quotes and save that. In my case I had to change it to this line GRUB_CMDLINE_LINUX_DEFAULT="usbhid.quirks=0x1B1C:0x1B17:0x20000408" after that, update grub sudo update-grub *If that command is not found, you probably run grub 2.0. Use this command instead. update-grub command is just a script which runs the grub-mkconfig sudo grub-mkconfig -o /boot/grub/grub.cfg after that is done, reboot the system. Now it should work normal and the message won't appear. Use the quirks for your keyboards. You can use this list below for Corsair keyboards. K65 RGB: usbhid.quirks=0x1B1C:0x1B17:0x20000408K70: usbhid.quirks=0x1B1C:0x1B09:0x0x20000408K70 RGB: usbhid.quirks=0x1B1C:0x1B13:0x20000408K95: usbhid.quirks=0x1B1C:0x1B08:0x20000408K95 RGB: usbhid.quirks=0x1B1C:0x1B11:0x20000408Strafe: usbhid.quirks=0x1B1C:0x1B15:0x20000408Strafe RGB: usbhid.quirks=0x1B1C:0x1B20:0x20000408M65 RGB: usbhid.quirks=0x1B1C:0x1B12:0x20000408Sabre RGB Optical: usbhid.quirks=0x1B1C:0x1B14:0x20000408Sabre RGB Laser: usbhid.quirks=0x1B1C:0x1B19:0x20000408Scimitar RGB: usbhid.quirks=0x1B1C:0x1B1E:0x20000408 Update Linux kernel 4.11: HID fixes are support for some more Corsair mice and keyboards. K65RGB and K70RGB have their HID quirk fixes in Linux 4.11 for these devices. See commit: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=deaba636997557fce46ca7bcb509bff5ea1b0558 You can find out your kernel version to use this command in the terminal uname -r to sum up, if you have Linux kernel 4.11 there's a chance you don't need to go through this process for adding usbhid quirks. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280564",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168546/"
]
} |
280,580 | When I'm performing float operations in shell using bc, the result is not the same if I use a regular calculator. Am I doing something wrong? For example, I need to find a volume of a sphere. User inputs the radius value. pi = 3.14volume=$(echo "scale = 2; (4 / 3) * $pi * ($r ^ 3)" | bc)echo "Volume is $volume" If radius = 3, unix returns 112.59, and the calculator 113.1. | You need to understand the meaning of the scale of an expression in bc . bc can do arbitrary precision (which doesn't necessarily mean infinite precision) while your calculator will probably have the precision of the float or double data type of your processor. In bc . The scale is the number of decimal after the dot, so related to the precision. The scale of an expression is determined based on rules that depend on which operator is involved and the scale variable (that variable is the one that gives the arbitrary dimension of the precision of bc that is, that can make its precision as big as you want). For instance, the scale of the result of a division is scale . So 4/3 when scale is 2 is 1.33 , so a very rough approximation of 4/3 . The scale of x * y will be min(a+b,max(scale,a,b)) (where a is the scale of x and b the scale of y ), so here 2 . so 1.33 * 3.14 will be 4.17 . For the rules, you can check the POSIX specification for bc . If you want a greater precision, increase scale . You can increase it indefinitely. With bc -l , scale is automatically set to 20 . $ pi='(a(1)*4)' r=3$ $ echo "(4 / 3) * $pi * ($r ^ 3)" | bc -l113.09733552923255658339$ echo "scale=1000; (4 / 3) * $pi * ($r ^ 3)" | bc -l113.0973355292325565846551617980621038310980983775038095550980053230\81390626303523950609253712316214447357331114478163039295378405943820\96034211293869262532022821022769726978675980014720642616237749375071\94371951239736040606251233364163241939497632687292433484092445725499\76355759335682169861368969085854085132237827361174295734753154661853\14730175311724413325296040789909975753679476982929026989441793959006\17331673453103113187002257495740245517842677306806456786589844246678\87098096084205774588430168674012241047863639151096770218070228090538\86527847499397329973941181834655436308584829346483609858475202045257\72294881898002877683392804259302509384339728638724440983234852757850\73357828522068813321247512718420036644790591105239053753290671891767\15857867345960859999994142720979823815034238137946746942088054039248\86988951308030971204086612694295227741563601129621951039171511955017\31142218396089302929537125655435196874321744263099764736353375070480\1468800991581641650380680694035580030527317911271523$ echo "scale=1; (4 / 3) * $pi * ($r ^ 3)" | bc -l97.2 You can also do all your calculations with a high scale , and reduce it in the end for display: $ echo "scale=10; (4 / 3) * $pi * ($r ^ 3)" | bc -l113.0973355107$ echo "scale=100; x = (4 / 3) * $pi * ($r ^ 3); scale = 10; x / 1" | bc -l113.0973355292 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280580",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168557/"
]
} |
280,615 | Is there a command line tool which shows in real time how much space remains on my external hard drive? | As Julie said, you can use df to display free space, passing it either the mount point or the device name: df --human-readable /homedf --human-readable /dev/sda1 You'll get something like this: Filesystem Size Used Avail Use% Mounted on/dev/sda1 833G 84G 749G 10% /home To run it continuously, use watch . Default update interval is 2 seconds, but you can tweak that with --interval : watch --interval=60 df --human-readable /dev/sda1 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/280615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4430/"
]
} |
280,622 | When I try to autocomplete files (with vim as argument 0): vim ~/.conf <TAB> It shows: _arguments:450: _vim_files: function definition file not found_arguments:450: _vim_files: function definition file not found_arguments:450: _vim_files: function definition file not found It was working fine before! Other commands: cat ~/.conf <TAB> give: cat ~/.config/ Why is zsh failing only at vim ? | Turns out that removing all ~/.zcompdump files solved it: rm -r ~/.zcompdump* | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
280,634 | I have a link and I would like to return only content between www. and .com e.g www.blablabla.com would return only blablabla How could I do that? When I use grep '\.[a-zA-Z0-9\.-]*\. ' it gives me .blablabla. | $ echo "www.blablabla.com" | grep -oP '(?<=\.)[a-zA-Z0-9\.-]*(?=\.)' blablabla -o -- print only matched parts of matching line -P -- Use Perl regex (?<=\.) -- after a literal . , aka, a "positive look-behind" ... [a-zA-Z0-9\.-]* -- match zero or more instances of lower & upper case characters, numbers 0-9, literal . and hyphen ... (?=\.) -- followed by a literal . , aka a "positive look-ahead" See this link for more on look arounds . Tools like https://regex101.com/ can help you break down your regular expressions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160473/"
]
} |
280,641 | I have several files like file1 , file2 ... etc in the same directory and each file may contain several lines matching PATTERN . I would like to delete the N th line from each line matching PATTERN e.g. with N = 3 and file1 content like 1 no match2 PATTERN3 same PATTERN4 no match here5 no match here either6 another PATTERN7 again, no match8 no9 last line the expected output is 1 no match2 PATTERN3 same PATTERN4 no match here7 again, no match8 no Editing the files in-place is a bonus, not a requirement (though there's at least one gnu tool that I know of that could edit them all in one go...) A similar question was asked here however that is a particular case (there's just a single line matching pattern in each file and the solutions there would only work with multiple lines matching pattern if they're separated by at least N +1 non-matching lines). | You could use awk for this I believe like so: awk -vN=3 '/PATTERN/ {skips[FNR+N]=1;} {if(!(FNR in skips)) print;}' <file> so each time we hit PATTERN we'll record the line that is N away from here, and only print those lines we have not marked for skipping. with gawk you could use -i inplace as well to do it in place As you noted, that wouldn't handle multiple files. Of course, you could wrap with a for loop to iterate over all the files, but if there aren't enough to make the commandline too long you could also do it like so: awk -vN=3 '{if(FNR==1) split("", skips, ":");} /PATTERN/ {skips[FNR+N]=1;} {if(!(FNR in skips)) print;}' * where we reset skips to an empty array each time FNR hits 1, so the start of each file. With gnu awk you could write it as: gawk -i inplace 'FNR==1{delete nr};/PATTERN/{nr[FNR+3]++};!(FNR in nr)' file* | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280641",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22142/"
]
} |
280,685 | Is there any clean, clear-cut POSIX equivalent to tac ? Readability as well as succinctness should both be considered. | The cleanest POSIX equivalent would be tail -r as -r Reverse. Copies lines from the specified starting point in the file in reverse order. The default for r is to print the entire file in reverse order. has been accepted for the next POSIX issue (and hopefully, it will be soon supported on all platforms). If tail -r is not available the "classic" text processing tools can be successfully used - as you and others have shown - to reverse the lines in a file. Readability and conciseness aside, even old ed can do it: ed -s infile <<\INg/^/m0,pqIN or, if it's the output from a pipeline that you want to reverse - read it into the text buffer first: ed -s <<\INr ! your | pipeline | goes | hereg/^/m0,pqIN | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
280,746 | I have a command-line script that performs an API call and updates a database with the results. I have a limit of 5 API calls per second with the API provider. The script takes more than 0.2 seconds to execute. If I run the command sequentially, it will not run fast enough and I will only be making 1 or 2 API calls per second. If I run the command sequentially, but simultaneously from several terminals, I might exceed the 5 calls / second limit. If there a way to orchestrate threads so that my command-line script is executed almost exactly 5 times per second? For example something that would run with 5 or 10 threads, and no thread would execute the script if a previous thread has executed it less than 200ms ago. | On a GNU system and if you have pv , you could do: cmd=' that command | to execute && as shell code'yes | pv -qL10 | xargs -n1 -P20 sh -c "$cmd" sh The -P20 is to execute at most 20 $cmd at the same time. -L10 limits the rate to 10 bytes per second, so 5 lines per second. If your $cmd s become two slow and causes the 20 limit to be reached, then xargs will stop reading until one $cmd instance at least returns. pv will still carry on writing to the pipe at the same rate, until the pipe gets full (which on Linux with a default pipe size of 64KiB will take almost 2 hours). At that point, pv will stop writing. But even then, when xargs resumes reading, pv will try and catch up and send all the lines it should have sent earlier as quickly as possible so as to maintain a 5 lines per second average overall. What that means is that as long as it's possible with 20 processes to meet that 5 run per second on average requirement, it will do it. However when the limit is reached, the rate at which new processes are started will not be driven by pv's timer but by the rate at which earlier cmd instances return. For instance, if 20 are currently running and have been for 10 seconds, and 10 of them decide to finish all at the same time, then 10 new ones will be started at once. Example: $ cmd='date +%T.%N; exec sleep 2'$ yes | pv -qL10 | xargs -n1 -P20 sh -c "$cmd" sh09:49:23.34701348609:49:23.52744683009:49:23.70759166409:49:23.88818248509:49:24.06825701809:49:24.33857086509:49:24.51896349109:49:24.69920664709:49:24.87972232809:49:25.14998815209:49:25.330095169 On average, it will be 5 times per second even if the delay between two runs will not always be exactly 0.2 seconds. With ksh93 (or with zsh if your sleep command supports fractional seconds): typeset -F SECONDS=0n=0; while true; do your-command & sleep "$((++n * 0.2 - SECONDS))"done That puts no bound on the number of concurrent your-command s though. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30018/"
]
} |
280,767 | I found something for videos, which looks like this. ffmpeg -i * -c:v libx264 -crf 22 -map 0 -segment_time 1 -g 1 -sc_threshold 0 -force_key_frames "expr:gte(t,n_forced*9)" -f segment output%03d.mp4 I tried using that for an audio file, but only the first audio file contained actual audio, the others were silent, other than that it was good, it made a new audio file for every second. Does anyone know what to modify to make this work with audio files, or another command that can do the same? | This worked for me when I tried it on a mp3 file. $ ffmpeg -i somefile.mp3 -f segment -segment_time 3 -c copy out%03d.mp3 Where -segment_time is the amount of time you want per each file (in seconds). References Splitting an audio file into chunks of a specified length 4.22 segment, stream_segment, ssegment - ffmpeg documentation | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/280767",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
280,784 | The example below is simplified to show the core of the problem, not the problem itself(my file tree is more complicated than that). Let's say that I have two files I want to back up; one in ~/somedir/otherdir , the other in ~/otherdir/somedir/ . I want to backup files from both directories in one git repository. How can I do this? Soft links only carry information about where file is stored, not the actual file, whereas hard links are somewhat foreign to me. Is this a case where hard links should be used? Clarification: I want to use git because of four reasons: I want to store dotfiles/scripts/configurations that are text files and keep track of changes over time, I know git, I have a private git repository I could use to store them, and I want to be able to share these files across multiple PCs. | If you don't mind moving the files... You could do this by moving the files into a git repository, and symlinking them to their old locations; you'd end up with ~/gitrepo/somedir/otherdir/file1 moved from ~/somedir/otherdir/file1 ~/gitrepo/otherdir/somedir/file2 moved from ~/otherdir/somedir/file2 a symlink from ~/somedir/otherdir/file1 to ~/gitrepo/somedir/otherdir/file1 a symlink from ~/otherdir/somedir/file2 to ~/gitrepo/otherdir/somedir/file2 You can then safely commit the files in the git repository, and manipulate them using git, and anything using the old file paths will see whatever is current in the git workspace. Linking the files the other way round, or using hard links, would be dangerous because any git operation which re-writes the file (changing branches, reverting to a previous version...) would break the link. (Hopefully this explains why hard links aren't really a viable solution.) With this kind of scenario you'll have to be careful with programs which re-write files completely, breaking links; many text editors do this, as do tools such as sed -i etc. A safer approach would be to move the entire folders into the git repository, and symlink the directories. If you want to keep the files in place... Another possibility is to create a git repository in your home directory, tell git to ignore everything, and then forcefully add the files you do want to track: cdgit initecho '*' > .gitignoregit add -f .gitignoregit commit -m "Initialise repository and ignore everything"git add -f somedir/otherdir/file1git commit -m "Add file1"git add -f otherdir/somedir/file2git commit -m "Add file2" Once you've done this, you'll easily be able to track changes to files you've explicitly added, but git won't consider new files. With this setup it should also be safe to have other git repositories in subdirectories of your home directory, but I haven't checked in detail... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129998/"
]
} |
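A minimal sketch of the move-and-symlink approach from 280,784, using the same example paths; ~/gitrepo stands for wherever you keep the repository:

mkdir -p ~/gitrepo/somedir/otherdir ~/gitrepo/otherdir/somedir
mv ~/somedir/otherdir/file1 ~/gitrepo/somedir/otherdir/
mv ~/otherdir/somedir/file2 ~/gitrepo/otherdir/somedir/
ln -s ~/gitrepo/somedir/otherdir/file1 ~/somedir/otherdir/file1
ln -s ~/gitrepo/otherdir/somedir/file2 ~/otherdir/somedir/file2
cd ~/gitrepo && git init && git add . && git commit -m "Track file1 and file2"

Anything that still opens the old paths follows the symlinks into the working tree, which is exactly what the answer relies on.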
280,816 | How to calculate disk usage of a file tree but excluding directories. I'd like to have something like: du --exclude type d I use rsync to mirror/backup part of my home dir and I want to double check total size after backup but for some reason one directory got different size on source and target namely: 12288 B and 16384 B. While obviously most of directories got 4096 B. Both source and target are ext4. | Simply feed it a list of everything you DO want counted using --files0-from find -type f -print0 | du --files0-from=- | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280816",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154433/"
]
} |
280,860 | The command hidepid is used to prevent users from seeing all processes that do not belong to them, but it doesn't offer the possibility of selecting a specific process. Is it possible to hide only one process on a Linux machine? | A bit dirty, and there is probably a cleaner solution (maybe using SELinux or grsec), but you can hide a process by mounting an empty directory inside of /proc/<pid> . For example, something like this: mount -o bind /empty/dir /proc/42 will prevent regular users from seeing process 42. They will, however, see that something is hidden as they will be able to see the mount point. If you want to do this for a service you would have to do this every time it is started, using its init script or whatever. If you want to hide the pid only from a specific user, you could play with namespaces (maybe using pam_namespace ) to have the mount bind done only in the namespace of the target user. In order to reverse this, simply run: umount /proc/42 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153195/"
]
} |
280,879 | I use gpg-agent for managing both PGP e SSH identities. The agent is started with a script like this gpg_agent_env="$XDG_CACHE_HOME/gpg-agent.env"export GPG_TTY="$(tty)"if ! ps -U "$USER" -o ucomm | grep -q gpg-agent; then eval "$({gpg-agent --daemon | tee $gpg_agent_env} 2> /dev/null)"else source "$gpg_agent_env" 2> /dev/nullfi which is sourced whenever I run an interactive shell.Everything works fine with this setup but there is an issue. Let's say I: open a terminal (launching the agent in background) and start working after a while open a second terminal do an action that requires entering a passphrase in the second terminal At this point gpg-agent will start pinentry-curses prompting a passphrase but it will do this in the first terminal which results in its output mixed with whatever was running (usually a text editor) with no way to resume the program or stop pinentry (it starts using 100% cpu and I have to kill it). I must be doing something wrong here. Anyone has experienced this? Update: I figured out this happens only for a prompt to unlock an SSH key, which looks like this ,while prompts for PGP keys always open on the correct (i.e. current) tty. | As per the upstream bug against openssh, the proper way to this is adding the following to your ~/.ssh/config : Match host * exec "gpg-connect-agent UPDATESTARTUPTTY /bye" This has worked for me perfectly so far. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110465/"
]
} |
280,881 | I have a function that has my EV3 speak speak(){ espeak -a 200 -s 130 -v la --stdout "$@" | aplay; } it works by simply speak "Say this" I want it to say the contents of a file, so I have this printf '%b\n' "$(cat joyPhrase)" How do get the output from the printf into the quotes for speak? | As per the upstream bug against openssh, the proper way to this is adding the following to your ~/.ssh/config : Match host * exec "gpg-connect-agent UPDATESTARTUPTTY /bye" This has worked for me perfectly so far. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/280881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135289/"
]
} |
280,954 | Example File : first column can have fixed set of 4 unordered values world1.com,world2.com,world3.com or world4.com second column is a key belongs to each line such that each of the four sets has unique random key. world4.com /randomkeyhghgdh778/key67567world1.com /randomkeygahjuh572/key639839world2.com /randomkey788gauh72/key63whjkworld3.com /randomkey788gauh72/key63whjkworld1.com /randomkeyhueh34778/key67uuu77world4.com /randomkey8998382/key6hh77686world3.com /randomkey7HHHH0000/key6333355kworld2.com /randomkeyJJJJ1111/key63333 and so on Desired Output: world1.com /randomkeygahjuh572/key639839world2.com /randomkey788gauh72/key63whjkworld3.com /randomkey788gauh72/key63whjkworld4.com /randomkeyhghgdh778/key67567world1.com /randomkeyhueh34778/key67uuu77world2.com /randomkeyJJJJ1111/key63333world3.com /randomkey7HHHH0000/key6333355kworld4.com /randomkey8998382/key6hh77686 | To organize the files by world: $ paste -d'\n' <(grep world1 file) <(grep world2 file) <(grep world3 file) <(grep world4 file)world1.com /randomkeygahjuh572/key639839world2.com /randomkey788gauh72/key63whjkworld3.com /randomkey788gauh72/key63whjkworld4.com /randomkeyhghgdh778/key67567world1.com /randomkeyhueh34778/key67uuu77world2.com /randomkeyJJJJ1111/key63333world3.com /randomkey7HHHH0000/key6333355kworld4.com /randomkey8998382/key6hh77686 How it works We can use grep to select the lines for each world: $ grep world4 fileworld4.com /randomkeyhghgdh778/key67567world4.com /randomkey8998382/key6hh77686 paste combines lines from multiple files. The paste command could look like this: paste -d'\n' file1 file2 file3 file3. We don't actually have to create true files for each world. Instead, we can create file-like objects for each one using process substitution : paste -d'\n' <(grep world1 file) <(grep world2 file) <(grep world3 file) <(grep world4 file) Process substitution is supported by bash, zsh, and AT&T ksh88 and ksh93 but not dash, pdksh, or mksh. Extra feature: sorting by key To illustrate the flexibility of this approach, we will sort the keys for each world. Note: sorting breaks up sets of lines. Do not use this if you want to keep sets together. We can separate the worlds using grep , and then sort each one, and then merge the lines back together using paste : $ paste -d'\n' <(grep world1 file | sort -k2,2) <(grep world2 file | sort -k2,2) <(grep world3 file | sort -k2,2) <(grep world4 file | sort -k2,2)world1.com /randomkeygahjuh572/key639839world2.com /randomkey788gauh72/key63whjkworld3.com /randomkey788gauh72/key63whjkworld4.com /randomkey8998382/key6hh77686world1.com /randomkeyhueh34778/key67uuu77world2.com /randomkeyJJJJ1111/key63333world3.com /randomkey7HHHH0000/key6333355kworld4.com /randomkeyhghgdh778/key67567 Note that sort depends on locale. Different locales may result in different orders. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280954",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8032/"
]
} |
280,996 | How do I find files that have ~ * and other special characters in the name? e.g. find . -name "*\*" should match "any characters" and then * , but it matches nothing; how can I get the command to correctly match the files? | Implementations of find vary, but they should all handle character classes in wildcards (POSIX.2, section 3.13): find . -name '*[~*]*' If newline is among your "special" characters, you may need to work out how to get your shell to pass it to find. In Bash, you can use find . -name $'*[\t \n]*' to show files containing whitespace, for example. A simpler method, if supported, is to use a character class: find . -name '*[[:space:]]*' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/280996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144031/"
]
} |
281,029 | I'm currently redirecting the output of a monitoring tool to a file, however what I'd like to do, is to redirect this output to a new file on my request (using a keybinding), without stopping the said tool. Something like monitor_program | handle_stdout Where handle_stdout allows me to define a new file where to put the log at certain point. I know I could easily write it, but I'm wondering if there's any tool that already allows this. | I'll suggest a named pipe. Create a pipe mkfifo p (call it whatever you want, if not 'p') Create a "reader" script that reads from the pipe and writes wherever you like Tell the monitoring program to write its logs to the named pipe Here's a sample reader script that reads from a named pipe 'p' and writes the data to an indexed 'mylog' file: #!/bin/shINDEX=0switchlog() { read INDEX < newindex echo now writing to "mylog.$INDEX"}trap switchlog USR1while :do cat p >> mylog."$INDEX"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281029",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168860/"
]
} |
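To show how the pieces from 281,029 fit together, here is one way the pipe, the reader script (saved as reader.sh) and the monitored program might be wired up; the file name newindex and the log naming come from the answer's script, while the PID handling is illustrative:

mkfifo p
./reader.sh &                # appends everything read from p to mylog.$INDEX
reader_pid=$!
monitor_program > p &        # point the monitoring tool at the pipe
# later, on request, switch to a new log file:
echo 1 > newindex            # the index the reader should switch to
kill -USR1 "$reader_pid"     # the reader re-reads newindex in its USR1 trap

Note that, as written, the reader only runs its trap once the current cat invocation returns, so the switch takes effect on the next iteration of its read loop.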
281,084 | I'm trying to create a Procmail rule based on all of From, Subject and a string in the body: :0 B:* ^From:.*[email protected].** ^Subject:.*fixed string in the subject line.** .*fixed string in the body.*/dev/null I'm trying to delete a persistently problematic mail source whose only safe option is to check all three of these. What am I doing wrong here? Presumably this is something do do with the B flag? | You need both H and B if you want to match headers and body. See the Procmail Tips page, full of useful examples. Try :0 HB* ^From:.*[email protected]* ^Subject:.*fixed string in the subject line* fixed string in the body/dev/null (note, the above doc refers to a bug in version 3.22 whereby once HB is used further uses of just B will still look through H). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168900/"
]
} |
281,089 | So I stupidly inadvertently destroyed ~/.bashrc. If I have open terminals with the settings that were previously there, is there a way to export the current settings back to a new .bashrc? (I've tried set > ~/.bashrc from one of said terminals with some measure of success, but wondering if there's some more magical way.) | One thing you can try is to recover your .bashrc from the memory of a running instance of bash. On Linux, run gcore PID to make a memory dump of a process specified by its PID. Whether this has a chance of working depends on how bash manages its memory; I haven't checked the source code to see if it's at all possible. It doesn't work for me on Debian jessie amd64. If that doesn't work, you can save your current settings, but you can't recover the way they got set, so a lot of information will be lost. If you had configuration that depends on the machine, on the terminal type, etc. then you'll only recover the settings for whatever instances of bash are still running. Print out all variables in a form that can be read back. This includes a lot of noise that you'll have to sort out. Environment variables (marked with declare -x ) shouldn't be defined in your .bashrc but you might have done so anyway. Remove variables that bash sets automatically (check the manual and look at the output of declare -p in bash --norc ). declare -p Print out all functions. This includes functions not defined by you, for example functions defined by the completion system (for which you want . /etc/bash_completion instead). declare -f Print out aliases. These can probably be used as they are. alias Print out shell options. Compare with the output of shopt in bash --norc to see what you changed. shopt Print out completion settings (if you use the context-sensitive completion system). Most of these probably come from the completion system; finding the ones you've tuned might be a little difficult. complete Print out key bindings, if you've defined key bindings in your .bashrc rather than in .inputrc . This includes default bindings. bind -p From now on, back up all your files, and put your configuration files under version control. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/281089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83301/"
]
} |
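For the memory-dump idea at the top of the 281,089 answer, a hedged sketch of what trying it could look like; whether anything readable survives depends entirely on how that bash instance manages its memory, and on Linux you may need root (or a relaxed ptrace setting) to attach to a process that is not your child:

# from a separate terminal; <PID> is the still-running bash you want to inspect
gcore -o /tmp/bash-core <PID>                      # gcore ships with gdb
strings /tmp/bash-core.<PID> | grep -E 'alias |PS1=' | less

The strings/grep step is only a way of fishing for fragments of old settings in the dump — it will not reconstruct the original .bashrc.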
281,111 | The first ssh command is done on my computer, I enter the key password, then put the second ssh command in while logged into the server. ssh -i cloudkey -L 6000:localhost:6001 [email protected] -p 9000#i get prompted for a password to use the key ssh -D 6001 -p 6666 localhost -l dancloud#i get prompted for a password associated with user dancloud How can I combine these commands into a single command to obtain the same results? I see that netcat and ProxyCommand could be useful here but haven't been able to figure it out. How can I hardcode the two passwords in and put this into a bash script? Hopefully I could just do ./login.sh and run all this code with passwords hard coded into the script and reach the same final result. | First possibility is obvious (note the -t switch): ssh -t -i cloudkey -L 6000:localhost:6001 [email protected] -p 9000 \ "ssh -D 6001 -p 6666 localhost -l dancloud" With ProxyCommand it is more complicated on the first sight, but conceptually you need only one forwarding ( netcat version is not advised anymore, using -W switch is more elegant): Host proxy Hostname 54.152.188.55 User admin IdentityFile cloudkeyHost target Hostname localhost Port 6666 User dancloud DynamicForward 6000 ProxyCommand ssh -W %h:%p proxy and then connect just using ssh target (see ... now you don't even need a bash script :)). Explanation: The second ssh is running also from your computer and therefore the Dynamic forwarding socket (SOCKS proxy) is created directly on your computer). About the passwords, it is not something that is advisable (passwords should be secret ), but it might work with sshpass in front of appropriate ssh . Manual page for ssh explains the -W switch as: -W host : port Requests that standard input and output on the client be forwarded to host on port over the secure channel. Implies -N , -T , ExitOnForwardFailure and ClearAllForwardings . Works with Protocol version 2 only. In combination with ProxyCommand , it connects to the requested hostname and then gives you basically embedded version of netcat (connects standard IO to the host : port pair (the argument). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168767/"
]
} |
281,158 | I have about 200 GB of log data generated daily, distributed among about 150 different log files. I have a script that moves the files to a temporary location and does a tar-bz2 on the temporary directory. I get good results as 200 GB logs are compressed to about 12-15 GB. The problem is that it takes forever to compress the files. The cron job runs at 2:30 AM daily and continues to run till 5:00-6:00 PM. Is there a way to improve the speed of the compression and complete the job faster? Any ideas? Don't worry about other processes and all, the location where the compression happens is on a NAS , and I can run mount the NAS on a dedicated VM and run the compression script from there. Here is the output of top for reference: top - 15:53:50 up 1093 days, 6:36, 1 user, load average: 1.00, 1.05, 1.07Tasks: 101 total, 3 running, 98 sleeping, 0 stopped, 0 zombieCpu(s): 25.1%us, 0.7%sy, 0.0%ni, 74.1%id, 0.0%wa, 0.0%hi, 0.1%si, 0.1%stMem: 8388608k total, 8334844k used, 53764k free, 9800k buffersSwap: 12550136k total, 488k used, 12549648k free, 4936168k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 7086 appmon 18 0 13256 7880 440 R 96.7 0.1 791:16.83 bzip27085 appmon 18 0 19452 1148 856 S 0.0 0.0 1:45.41 tar cjvf /nwk_storelogs/compressed_logs/compressed_logs_2016_30_04.tar.bz2 /nwk_storelogs/temp/ASPEN-GC-32459:nkp-aspn-1014.log /nwk_stor30756 appmon 15 0 85952 1944 1000 S 0.0 0.0 0:00.00 sshd: appmon@pts/030757 appmon 15 0 64884 1816 1032 S 0.0 0.0 0:00.01 -tcsh | The first step is to figure out what the bottleneck is: is it disk I/O, network I/O, or CPU? If the bottleneck is the disk I/O, there isn't much you can do. Make sure that the disks don't serve many parallel requests as that can only decrease performance. If the bottleneck is the network I/O, run the compression process on the machine where the files are stored: running it on a machine with a beefier CPU only helps if the CPU is the bottleneck. If the bottleneck is the CPU, then the first thing to consider is using a faster compression algorithm. Bzip2 isn't necessarily a bad choice — its main weakness is decompression speed — but you could use gzip and sacrifice some size for compression speed, or try out other formats such as lzop or lzma. You might also tune the compression level: bzip2 defaults to -9 (maximum block size, so maximum compression, but also longest compression time); set the environment variable BZIP2 to a value like -3 to try compression level 3. This thread and this thread discuss common compression algorithms; in particular this blog post cited by derobert gives some benchmarks which suggest that gzip -9 or bzip2 with a low level might be a good compromise compared to bzip2 -9 . This other benchmark which also includes lzma (the algorithm of 7zip, so you might use 7z instead of tar --lzma ) suggests that lzma at a low level can reach the bzip2 compression ratio faster. Just about any choice other than bzip2 will improve decompression time. Keep in mind that the compression ratio depends on the data, and the compression speed depends on the version of the compression program, on how it was compiled, and on the CPU it's executed on. Another option if the bottleneck is the CPU and you have multiple cores is to parallelize the compression. There are two ways to do that. One that works with any compression algorithm is to compress the files separately (either individually or in a few groups) and use parallel to run the archiving/compression commands in parallel. 
This may reduce the compression ratio but increases the speed of retrieval of an individual file and works with any tool. The other approach is to use a parallel implementation of the compression tool; this thread lists several. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/281158",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138527/"
]
} |
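Two hedged sketches of the parallelisation options mentioned at the end of the 281,158 answer — pigz (a parallel gzip) and GNU parallel are assumed to be installed, and /nwk_storelogs/temp stands in for wherever the log files sit:

# option 1: compress the ~150 files individually, several at a time
find /nwk_storelogs/temp -type f -print0 | parallel -0 -j8 gzip -6 {}

# option 2: keep one archive, but let the compressor use every core
tar -cf - /nwk_storelogs/temp | pigz -6 > compressed_logs.tar.gz

Option 1 trades the single archive for one .gz per file; option 2 keeps the tar-of-everything layout but replaces single-threaded bzip2 with a multi-threaded compressor at a lower compression level.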
281,174 | I have seen % and then some letter for example %h in config files in Ubuntu and in other Linux distributions but I do not know the names of them or what they all do, so I am just wondering if there is a resource that lists them all and/or gives descriptions of what they all do? If am not mistaken I think that %h is a variable for a users home directory but I am not positive. | The meaning of %h in a config file should be documented in the corresponding program's man page. Here are some examples for %h : smb.conf - the Internet hostname that Samba is running on. date - same as %b (locale's abbreviated month name (e.g., Jan)) ssh_config - various (e.g. remote host name) sshd_config - home directory of the user being authenticated | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281174",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147202/"
]
} |
281,194 | currently I have a problem with a server. One user who is hosting a lot of sites got hacked and some of his php files were modified. Now I want to get a list of the infected files and also want to check if he cleaned the whole mess. The common thing between the infected files is that the first line is very long. So I'd like to find every php file on the server that has a min length of 1000 chars. Well, I can find all php files with "find" and get with "head -n 1" the first line and count the chars with "wc -m". But how can I combine it together? | You can do it with just find and awk : find . -type f -name '*.php' -size +1000c -exec awk ' FNR > 1 {nextfile} length >= 1000 {print FILENAME}' {} + The awk script skips to next file after the first line of every file. It prints the filename of the current file if the current line is >= 1000 characters long. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281194",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168990/"
]
} |
281,201 | Trying to check for 3 conditions in one line of code, but I'm stuck. Essentially, I need to code for the following: IF string1 is not equal to string2 AND string3 is not equal to string4 OR bool1 = true THEN display "conditions met - running code ...". As requested in the comments, I've updated my example to try to make the problem clearer. #!/bin/shstring1="a"string2="b"string3="c"string4="d"bool1=true# the easy-to-read way ....if [ "$string1" != "$string2" ] && [ "$string3" != "$string4" ] ; then echo "conditions met - running code ..."fiif $bool1 ; then echo "conditions met - running code ..."fi# or the shorter way ...[ "$string1" != "$string2" ] && [ "$string3" != "$string4" ] && echo "conditions met - running code ..."$bool1 && echo "conditions met - running code ..." The code above will potentially run twice: if the first 2 conditions are met, and then again if the 3rd condition is met. This is not what I need. The issue with this example is that it involves 2 distinct calls to 'echo' - (note: in the real code, it's not an echo, but you get the idea). I'm trying to reduce the code duplication by combining the 3 condition check into a single command. I'm sure there's a few people now shaking their heads and shouting at the screen "That's NOT how you do it!" And there's probably others waiting to mark this as a duplicate ... well, I looked but I'm damned if I could figure out how to do this from the answers I've read. Can someone please enlighten me ? :) | This will work: if [ "$string1" != "$string2" ] \ && [ "$string3" != "$string4" ] \ || [ "$bool1" = true ]; then echo "conditions met - running code ...";fi; Or surround with { ;} for readability and easy to maintain in future. if { [ "$string1" != "$string2" ] && [ "$string3" != "$string4" ] ;} \ || [ "$bool1" = true ] ; then echo "conditions met - running code ...";fi; Points to note: There is no such thing as a boolean variable. . Braces need the final semicolon ( { somecmd;} ) . && and || evaluate left-to-right in the above — && has higher precedence than || only within (( )) and [[..]] && higher precedence only happen in [[ ]] is proven as follows. Assume bool1=true . With [[ ]] : bool1=trueif [[ "$bool1" == true || "$bool1" == true && "$bool1" != true ]]; then echo 7; fi #1 # Print 7, due to && higher precedence than ||if [[ "$bool1" == true ]] || { [[ "$bool1" == true && "$bool1" != true ]] ;}; then echo 7; fi # Same as #1if { [[ "$bool1" == true || "$bool1" == true ]] ;} && [[ "$bool1" != true ]] ; then echo 7; fi # NOT same as #1if [[ "$bool1" != true && "$bool1" == true || "$bool1" == true ]]; then echo 7; fi # Same result as #1, proved that #1 not caused by right-to-left factor, or else will not print 7 here With [ ] : bool1=trueif [ "$bool1" == true ] || [ "$bool1" == true ] && [ "$bool1" != true ]; then echo 7; fi #1, no output, due to && IS NOT higher precedence than ||if [ "$bool1" == true ] || { [ "$bool1" == true ] && [ "$bool1" != true ] ;}; then echo 7; fi # NOT same as #1if { [ "$bool1" == true ] || [ "$bool1" == true ] ;} && [ "$bool1" != true ]; then echo 7; fi # Same as #1if [ "$bool1" != true ] && [ "$bool1" == true ] || [ "$bool1" == true ]; then echo 7; fi # Proved that #1 not caused by || higher precedence than &&, or else will not print 7 here, instead #1 is only left-to-right evaluation | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/281201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110015/"
]
} |
281,302 | A trivially conflicting package foo can be made to work with bar , by running dpkg --force-conflicts -i foo . But eventually it's time to upgrade, and 'apt-get' objects: % apt-get upgradeReading package lists... DoneBuilding dependency tree Reading state information... DoneYou might want to run 'apt-get -f install' to correct these.The following packages have unmet dependencies: foo : Conflicts: bar but 0.2-1 is installedE: Unmet dependencies. Try using -f. Can apt-get be tweaked/forced to tolerate the (pretty much fixed) conflict, then upgrade? (Quickie existence proof: uninstall foo , then upgrade, then reinstall foo as before. Therefore it is possible, the question is finding the least cumbersome mechanism.) An example, but this question is not about any two particular packages. For several years GNU parallel has had a trivial conflict with moretutils ; each provides /usr/bin/parallel . dpkg can force co-existence: # assume 'moreutils' is already installed, and 'parallel' is in# apt's cache directory.dpkg --force-conflicts -i /var/cache/apt/archives/parallel_20141022+ds1-1_all.deb This creates a diversion, renames the moreutils version to /usr/bin/parallel.moreutils . Both programs work, until the user upgrades. I tried an -o option, but that didn't bring on peace: apt-get -o Dpkg::Options::="--force-conflicts" install parallel moreutils Possible -o options number in the hundreds, however... | Since OP asked for a list of commands (with which to change the relevant metadata of the package) in the comments to Gilles' answer, here it is: # download .debapt download parallel# alternatively: aptitude download parallel# unpackdpkg-deb -R parallel_*.deb tmp/# make changes to the package metadatased -i \ -e '/^Version:/s/$/~nomoreutconfl/' \ -e '/^Conflicts: moreutils/d' \ tmp/DEBIAN/control# pack anewdpkg-deb -b tmp parallel_custom.deb# installdpkg -i parallel_custom.deb This is under the assumptions that the conflicts line only has moreutils as an entry (and without version restrictions) as was the case in my installation. Otherwise, use '/^Conflicts:/s/\(, \)\?moreutils\( [^,]\+\)\?//' as the second sed script to only remove the relevant part of the line and support version restrictions. Your installed package won't be overwritten by newer versions from the repository and you have to manually repeat this procedure for every update to the GNU parallel package if you want to keep this package up-to-date. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165517/"
]
} |
281,305 | I'm attempting to add this alias to my .zshrc but run into errors. I've tried escaping the ==, ", and && values and still no luck. What am I missing? alias broke="ssh -t [email protected] tail -f documents/dir/`date -u +%Y%m%d`.log | awk '$2=="ABC:" && int($5)>=26 || int($5)<=-26'" | Since OP asked for a list of commands (with which to change the relevant metadata of the package) in the comments to Gilles' answer, here it is: # download .debapt download parallel# alternatively: aptitude download parallel# unpackdpkg-deb -R parallel_*.deb tmp/# make changes to the package metadatased -i \ -e '/^Version:/s/$/~nomoreutconfl/' \ -e '/^Conflicts: moreutils/d' \ tmp/DEBIAN/control# pack anewdpkg-deb -b tmp parallel_custom.deb# installdpkg -i parallel_custom.deb This is under the assumptions that the conflicts line only has moreutils as an entry (and without version restrictions) as was the case in my installation. Otherwise, use '/^Conflicts:/s/\(, \)\?moreutils\( [^,]\+\)\?//' as the second sed script to only remove the relevant part of the line and support version restrictions. Your installed package won't be overwritten by newer versions from the repository and you have to manually repeat this procedure for every update to the GNU parallel package if you want to keep this package up-to-date. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169083/"
]
} |
281,309 | I used to believe that the appropriate way of breaking the lines in a list is
command1 && \
command2
It turned out that it isn't so, one doesn't need \
$ [ $(id -u) -eq 1000 ] &&
> echo yes
yes
The same works with pipes | the same way. The bash man page sections on pipelining and lists didn't shed any light on this. Thus, my question is: what is the proper usage of \ to break long lines? | If the statement would be correct without continuation, you need to use \ . Therefore, the following works without a backslash, as you can't end a command with a && :
echo 1 &&
echo 2
Here, you need the backslash:
echo 1 2 3 \
4
or
echo 1 \
&& echo 2
Otherwise, bash would execute the command right after processing the first line without waiting for the next one. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/281309",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85039/"
]
} |
281,349 | UPDATE 1: userone@desktop:~$ sudo umount "/media/userone/New Volume"umount: /media/userone/New Volume: mountpoint not founduserone@desktop:~$ sudo cryptsetup luksClose /dev/mapper/luks-04cb4ea7-7bba-4202-9056-a65006fe52d7Device /dev/mapper/luks-04cb4ea7-7bba-4202-9056-a65006fe52d7 is not active.userone@desktop:~$ sudo lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsdb 8:16 1 29.5G 0 disk └─sdb1 8:17 1 29.5G 0 part └─luks_USB 252:3 0 29.5G 0 crypt sr0 11:0 1 1024M 0 rom userone@desktop:~$ sudo cryptsetup luksOpen /dev/sdb1 luks_USBDevice luks_USB already exists.userone@desktop:~$ sudo mkdir /media/userone/luks_USBmkdir: cannot create directory ‘/media/userone/luks_USB’: File existsuserone@desktop:~$ sudo mount /dev/mapper/luks_USB /media/userone/luks_USBmount: wrong fs type, bad option, bad superblock on /dev/mapper/luks_USB, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so.userone@desktop:~$ dmesg | tail[20639.663250] JBD2: no valid journal superblock found[20639.663257] EXT4-fs (dm-3): error loading journal[20828.133606] JBD2: no valid journal superblock found[20828.133613] EXT4-fs (dm-3): error loading journal[20832.682397] JBD2: no valid journal superblock found[20832.682405] EXT4-fs (dm-3): error loading journal[20851.042343] JBD2: no valid journal superblock found[20851.042349] EXT4-fs (dm-3): error loading journal[21053.115711] JBD2: no valid journal superblock found[21053.115718] EXT4-fs (dm-3): error loading journaluserone@desktop:~$ ORIGINAL QUESTION: When I plug in my encrypted USB drive, I get this message in a GNOME dialog: Error mounting /dev/dm-3 at /media/userone/New Volume: Command line mount -t "ext4" \ -o "uhelper=udisks2,nodev,nosuid" \ "/dev/dm-3" "/media/userone/New Volume"'exited with non-zero exit status 32: mount: wrong fs type, bad option, bad superblock on /dev/mapper/luks-04cb4ea7-7bba-4202-9056-a65006fe52d7, missing codepage or helper program, or other error.In some cases, useful info is found in syslog - try dmesg | tail or so. Anyone know how this can be corrected? It was working fine yesterday. | It looks as though the journal has become corrupt, doing some searches over the past few days, this seems to not be uncommon on devices that use LUKS. You could try running an fsck on the device, acknowledging that any data on the device may not be accessible after - you may like to use dd to make a copy of the drive before this. A common resolution appears to be to create the EXT4 file system from scratch with journaling disabled using mke2fs -t ext4 -O ^has_journal /dev/device . Obviously you would lose the advantages of having a journaled file system by doing this, and lose any data on the device! Problem This problem is that the EXT4 file system’s journal has become corrupt. The problem is perhaps made a little obscure due to the fact that the device is encrypted and the file system resides “inside” the encryption. Resolution There is a thread of comments below, however I thought a summary here would be more beneficial to anyone who might come across this in the future. Unencrypt the device, this allows us to get at the device that the EXT4 file system resides on: sudo cryptsetup luksOpen /dev/sdb1 luks_USB Create an image of the device that has been created in the previous step. We need to do this because file system checking utils generally won’t work on mounted devices, and although the device with EXT4 on isn’t mounted, it’s “parent” is. 
sudo dd if=/dev/dm-3 of=/tmp/USBimage.dd (add bs and count arguments as you see fit). Now we have an image, we can run the file system checks: sudo e2fsck /tmp/USBimage.dd any problems found can be evaluated and fixed as required. You can check to see if your file system has been fixed by attempting to mount the image: sudo mount -o loop /tmp/USBimage.dd /mnt At this point the OP was able to gain access to their files. While I would suggest wiping the USB stick and starting over (back to a known state, etc), I think it would be possible to unmount the image from /mnt and then copy if back onto the device that become corrupt: sudo dd if=/tmp/USBimage.dd of=/dev/dm-3 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4430/"
]
} |
281,352 | Is there a simple way to do this with redirection or pipes without creating FILE1? I want to apply process2 to the body of the output of process1, without touching the first and last few lines. process1 >FILE1head -n 3 FILE1tail -n +4 FILE1|head -n -4|process2 # producing outputtail -n 4 FILE1rm -f FILE1 | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169121/"
]
} |
281,361 | I installed and run OpenBSD and I wanted to install git but there were no repositories for OpenBSD. What should I do if I want to install some programs with pkg_add? | On 6.0 and below, add a mirror to the file /etc/pkg.conf : installpath = http://ftp.eu.openbsd.org/pub/OpenBSD/5.9/packages/amd64/ On 6.1 and later, use the file /etc/installurl : https://ftp.eu.openbsd.org/pub/OpenBSD/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/281361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9115/"
]
} |
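A minimal sketch of the whole sequence on a 6.1-or-later system, assuming the stock CDN mirror is acceptable (any mirror root from the OpenBSD mirror list works the same way; pkg_add appends the release, architecture and packages path itself):
# echo 'https://cdn.openbsd.org/pub/OpenBSD' > /etc/installurl
# pkg_add git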
281,375 | I have observed that between repeated boots of the same system, the mapping between the device names /dev/sda , /dev/sdb/ ... and the physical hard drives stays the same. However I am not sure if it remains the same if I plug the hard drives into different sockets on the motherboard or if I add/remove drives. What guarantees does Linux make regarding the mapping of device names to the physical hard drives? Which rules does it use to map physical hard drives to files in /dev/? | The drive names are (on a typical Linux system) decided by the kernel (as the device must first be detected there), and may later be modified by udev. How it decides which hardware maps to which block special file is an implementation detail that will depend your udev configuration, kernel configuration, module setup, and many other things, too (including plain luck). The mapping of a device to a drive letter is not guaranteed to always be the same even with the same hardware and configuration (there are some systems which are particularly prone to swapping around device names due to race conditions, like those in parallel module loading). To answer the question you didn't ask, don't use /dev/sd* as an identifier for anything unless you are sure about the device you're mounting beforehand (for example, you are manually mounting after checking with fdisk and/or blkid ). Instead, use filesystem labels, filesystem UUIDs, or disk IDs to ensure you are pointing to the correct device, partition, or filesystem by its properties, instead of its detection order. You can find the disk IDs in /dev/disk/by-id , which is a convienient place to mount from, and guarantees that you're always using the same disk. To find which disk IDs you can use for the partition currently on /dev/sda1 , for example, you can use find : $ find -L /dev/disk/by-id -samefile /dev/sda1/dev/disk/by-id/wwn-0x5000cca22dd9fc29-part1/dev/disk/by-id/ata-HGST_HUS724020ALA640_PN1181P6HV51ZW-part1 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/281375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29428/"
]
} |
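For mounting, the same idea is usually expressed with filesystem UUIDs in /etc/fstab; a hedged example (the UUID and mount point below are invented, take the real UUID from blkid):
$ sudo blkid /dev/sda1
/dev/sda1: UUID="0a3407de-014b-458b-b5c1-848e92a327a3" TYPE="ext4"
# then in /etc/fstab:
UUID=0a3407de-014b-458b-b5c1-848e92a327a3  /data  ext4  defaults  0  2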
281,378 | I do not know why, but after making a whole bunch of fish aliases. I'm assuming I have neglected one simple step after assigning them all but I cannot seem find the solution myself. Can anyone lend me a hand? Thank you very much! ~Ev | A fish alias is actually implemented as a function. To save a function, you need funcsave . So this is the sequence alias foo=barfuncsave foo That creates ~/.config/fish/functions/foo.fish which will then be available in any fish session. The alias command also works without the = : alias foo "bar baz qux" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/281378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47309/"
]
} |
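For illustration, the file that funcsave writes is just an ordinary fish function definition, roughly like the following (the exact wrapper options and header vary between fish versions):
# ~/.config/fish/functions/foo.fish
function foo --wraps bar --description 'alias foo=bar'
    bar $argv
end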
281,398 | I'm playing with stat command which basically shows the inode information. Though I'm showing the information of a small file (146 characters), it shows 8 blocks. I was wondering why is that? Since the size of a page should be 4KB, which I expect the number is 1. BTW the file system I'm using is ext4. To give you more details: more tmp.sh #DATE=$(date +"%Y%m%d_%H%M%S")#cp /var/log/filter.log /var/log/logHistory/filter_{$DATE}.logdd=$(date --date='-1 day' +"%Y%m%d")rm filter_$dd* stat tmp.sh File: ‘tmp.sh’ Size: 146 Blocks: 8 IO Block: 4096 regular fileDevice: 801h/2049d Inode: 1835522 Links: 1Access: (0664/-rw-rw-r--) Uid: ( 1000/timestring) Gid: ( 1000/timestring)Access: 2016-05-05 17:34:08.251864800 -0700Modify: 2015-01-22 20:40:18.971521274 -0800Change: 2015-01-22 20:40:18.975521274 -0800 Birth: - | The "blocks" that stat() reports are 512 byte units. The normal block size used by ext4 is 4kb, or 8 of these "blocks". That means that the space used by a file on ext4 must be an integer multiple of 8 "blocks", and so the smallest size used by any file less than or equal to 4096 bytes in size is 8 512 byte blocks. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281398",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154798/"
]
} |
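A quick way to see this with GNU stat on ext4: even a 1-byte file typically occupies 8 of the 512-byte units.
$ printf x > tiny
$ stat -c 'size=%s blocks=%b blocksize=%B' tiny
size=1 blocks=8 blocksize=512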
281,409 | Hello I am developing an web with python Flask in a linux server, in doing so I am trying to use pdfkit and wkhtmltopdf . I am using a linux server(ubuntu). In Putty, after logging into my server, at root@myname:~# I downloaded pdfkit using apt-get , and downloaded wkhtmltopdf . And I go to python by typing python on the command. And I am trying to convert a url into a pdf file by typing in python; import pdfkitpdfkit.from_url('sample url','output.pdf') here I got an error saying: IOError: wkhtmltopdf exited with non-zero code -6. error:QXcbConnection: Could not connect to display. What went wrong? installing was a problem? or which part? And also, if it works correctly, where can I find the output file? which directory? I am using WinSCP to manage files. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169154/"
]
} |
281,422 | How do I double each line of input piped in? Example: echo "foobar" | MYSTERY_COMMAND foobar foobar | Just use sed p . echo foobar | sed p You don't need cat , either: sed p input.txt# orsed p input.txt > output.txt Explanation p is the sed command for "print." Print is also sed 's default action. So when you tell sed explicitly to print, the result is that it prints every line twice . Let's say you wanted to only print lines that include the word "kumquat." You can use -n to supress sed 's default action of printing, and then tell it explicitly to print lines that match /kumquat/ : sed -n /kumquat/p Or if you only want to print the 5th line and nothing else: sed -n 5p Or, if you want to print every line from the 15th line to the 27th line: sed -n 15,17p If you want to print every line except lines which contain "kumquat," you could do this by just deleting all the kumquat lines, and letting sed 's default action of printing the line take place on non-deleted lines. You wouldn't need the -n flag or an explicit p command: sed /kumquat/d sed works on a simple pattern—action syntax. In the above examples, I've shown line-number-based patterns and regex-based patterns, and just two actions (print and delete). sed has a lot more power than that. I should really include the most common and useful sed command there is: sed s/apples/oranges/g This replaces every instance of "apples" with "oranges" and prints the result. (If you omit the g lobal flag to the s ubstitute command, only the first instance on every line will be changed.) Further reading (highly recommended): Sed - An Introduction and Tutorial | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136912/"
]
} |
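If awk reads more naturally, the same doubling can be written as:
echo foobar | awk '{print; print}'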
281,472 | File, TABLE1 ------- 1234TABLE1 ------- 9555 TABLE1 ------- 87676 TABLE1------- 2344 I want the output like TABLE1 ------- 12349555 876762344 | Here is one liner, using sed and awk : sed '/^$/d' filename | awk '!a[$1]++' Combination of grep and awk : grep . filename | awk '!a[$1]++' As @ cas suggested, You can do that in single awk command also. awk '!x[$1]++ && ! /^[[:blank:]]*$/' filename | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
281,479 | I have a bash script I use to run a few python and C++ programs in a sequence. Each program takes in some input parameters which I define in the bash script. So as an example I run the program like this: echo $param1 $param2 $param3 | python foo.py The python program outputs some values which we use as input for later programs. The thing, like I said above, is that I do not need to run the python program if I store the values in some file and read them from there. So my question then is. Is there some generic tool that achieves this feature? That is, is there some program called 'bar' which I could run like bar $param1 $param2 $param3 "python foo.py" which would check if a cache file exists, if yes it would check if the program has been run with the given parameters and if yes it would output the cached output values instead of running the program again. Edit : The name of the log file could also be provided as input of course. | I just got nerd-sniped into writing a rather complete script for this; latest version is at https://gist.github.com/akorn/51ee2fe7d36fa139723c851d87e56096 . Advantages over sivann's shell implementation: also takes envvars into account when computing cache key; completely avoids race conditions using locking instead of relying on a random wait better performance when called in tight loop due to fewer forks also caches stderr completely transparent: prints nothing; doesn't prevent parallel execution of same command; just runs command uncached if there is a problem with the cache configurable via envvars and command line switches can prune its cache (remove all obsolete entries) Disadvantage: written in zsh, not bash. #!/bin/zsh## Purpose: run speficied command with specified arguments and cache result. If cache is fresh enough, don't run command again but return cached output.# Also cache exit status and stderr.# Copyright (c) 2019-2020 András Korn; License: GPLv3# Use silly long variable names to avoid clashing with whatever the invoked program might useRUNCACHED_MAX_AGE=${RUNCACHED_MAX_AGE:-300}RUNCACHED_IGNORE_ENV=${RUNCACHED_IGNORE_ENV:-0}RUNCACHED_IGNORE_PWD=${RUNCACHED_IGNORE_PWD:-0}[[ -n "$HOME" ]] && RUNCACHED_CACHE_DIR=${RUNCACHED_CACHE_DIR:-$HOME/.runcached}RUNCACHED_CACHE_DIR=${RUNCACHED_CACHE_DIR:-/var/cache/runcached}function usage() { echo "Usage: runcached [--ttl <max cache age>] [--cache-dir <cache directory>]" echo " [--ignore-env] [--ignore-pwd] [--help] [--prune-cache]" echo " [--] command [arg1 [arg2 ...]]" echo echo "Run 'command' with the specified args and cache stdout, stderr and exit" echo "status. If you run the same command again and the cache is fresh, cached" echo "data is returned and the command is not actually run." echo echo "Normally, all exported environment variables as well as the current working" echo "directory are included in the cache key. The --ignore options disable this." echo "The OLDPWD variable is always ignored." echo echo "--prune-cache deletes all cache entries older than the maximum age. There is" echo "no other mechanism to prevent the cache growing without bounds." echo echo "The default cache directory is ${RUNCACHED_CACHE_DIR}." echo "Maximum cache age defaults to ${RUNCACHED_MAX_AGE}." echo echo "CAVEATS:" echo echo "Side effects of 'command' are obviously not cached." echo echo "There is no cache invalidation logic except cache age (specified in seconds)." echo echo "If the cache can't be created, the command is run uncached." 
echo echo "This script is always silent; any output comes from the invoked command. You" echo "may thus not notice errors creating the cache and such." echo echo "stdout and stderr streams are saved separately. When both are written to a" echo "terminal from cache, they will almost certainly be interleaved differently" echo "than originally. Ordering of messages within the two streams is preserved." exit 0}while [[ -n "$1" ]]; do case "$1" in --ttl) RUNCACHED_MAX_AGE="$2"; shift 2;; --cache-dir) RUNCACHED_CACHE_DIR="$2"; shift 2;; --ignore-env) RUNCACHED_IGNORE_ENV=1; shift;; --ignore-pwd) RUNCACHED_IGNORE_PWD=1; shift;; --prune-cache) RUNCACHED_PRUNE=1; shift;; --help) usage;; --) shift; break;; *) break;; esacdonezmodload zsh/datetimezmodload zsh/statzmodload zsh/systemzmodload zsh/files# the built-in mv doesn't fall back to copy if renaming fails due to EXDEV;# since the cache directory is likely on a different fs than the tmp# directory, this is an important limitation, so we use /bin/mv insteaddisable mv mkdir -p "$RUNCACHED_CACHE_DIR" >/dev/null 2>/dev/null((RUNCACHED_PRUNE)) && find "$RUNCACHED_CACHE_DIR/." -maxdepth 1 -type f \! -newermt @$[EPOCHSECONDS-RUNCACHED_MAX_AGE] -delete 2>/dev/null[[ -n "$@" ]] || exit 0 # if no command specified, exit silently( # Almost(?) nothing uses OLDPWD, but taking it into account potentially reduces cache efficency. # Thus, we ignore it for the purpose of coming up with a cache key. unset OLDPWD ((RUNCACHED_IGNORE_PWD)) && unset PWD ((RUNCACHED_IGNORE_ENV)) || env echo -E "$@") | md5sum | read RUNCACHED_CACHE_KEY RUNCACHED__crap__# make the cache dir hashed unless a cache file already exists (created by a previous version that didn't use hashed dirs)if ! [[ -f $RUNCACHED_CACHE_DIR/$RUNCACHED_CACHE_KEY.exitstatus ]]; then RUNCACHED_CACHE_KEY=$RUNCACHED_CACHE_KEY[1,2]/$RUNCACHED_CACHE_KEY[3,4]/$RUNCACHED_CACHE_KEY[5,$] mkdir -p "$RUNCACHED_CACHE_DIR/${RUNCACHED_CACHE_KEY:h}" >/dev/null 2>/dev/nullfi# If we can't obtain a lock, we want to run uncached; otherwise# 'runcached' wouldn't be transparent because it would prevent# parallel execution of several instances of the same command.# Locking is necessary to avoid races between the mv(1) command# below replacing stderr with a newer version and another instance# of runcached using a newer stdout with the older stderr.: >>$RUNCACHED_CACHE_DIR/$RUNCACHED_CACHE_KEY.lock 2>/dev/nullif zsystem flock -t 0 $RUNCACHED_CACHE_DIR/$RUNCACHED_CACHE_KEY.lock 2>/dev/null; then if [[ -f $RUNCACHED_CACHE_DIR/$RUNCACHED_CACHE_KEY.stdout ]]; then if [[ $[EPOCHSECONDS-$(zstat +mtime $RUNCACHED_CACHE_DIR/$RUNCACHED_CACHE_KEY.stdout)] -le $RUNCACHED_MAX_AGE ]]; then cat $RUNCACHED_CACHE_DIR/$RUNCACHED_CACHE_KEY.stdout & cat $RUNCACHED_CACHE_DIR/$RUNCACHED_CACHE_KEY.stderr >&2 & wait exit $(<$RUNCACHED_CACHE_DIR/$RUNCACHED_CACHE_KEY.exitstatus) else rm -f $RUNCACHED_CACHE_DIR/$RUNCACHED_CACHE_KEY.{stdout,stderr,exitstatus} 2>/dev/null fi fi # only reached if cache didn't exist or was too old if [[ -d $RUNCACHED_CACHE_DIR/. ]]; then RUNCACHED_tempdir=$(mktemp -d 2>/dev/null) if [[ -d $RUNCACHED_tempdir/. ]]; then $@ >&1 >$RUNCACHED_tempdir/${RUNCACHED_CACHE_KEY:t}.stdout 2>&2 2>$RUNCACHED_tempdir/${RUNCACHED_CACHE_KEY:t}.stderr RUNCACHED_ret=$? 
echo $RUNCACHED_ret >$RUNCACHED_tempdir/${RUNCACHED_CACHE_KEY:t}.exitstatus 2>/dev/null mv $RUNCACHED_tempdir/${RUNCACHED_CACHE_KEY:t}.{stdout,stderr,exitstatus} $RUNCACHED_CACHE_DIR/${RUNCACHED_CACHE_KEY:h} 2>/dev/null rmdir $RUNCACHED_tempdir 2>/dev/null exit $RUNCACHED_ret fi fifi# only reached if cache not created successfully or lock couldn't be obtainedexec $@ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281479",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140748/"
]
} |
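A much smaller bash sketch of the same idea, for cases where the full script above is more than needed; it keys the cache only on a hash of the command line (not on environment variables or the working directory) and it does not preserve stderr or the exit status, so treat it as a starting point rather than a finished tool:
#!/bin/bash
# usage: cached <max-age-seconds> <command> [args...]
cached() {
  local ttl=$1; shift
  local dir=${XDG_CACHE_HOME:-$HOME/.cache}/cached
  mkdir -p "$dir"
  local key
  key=$(printf '%s\0' "$@" | sha256sum | cut -d' ' -f1)
  local file=$dir/$key
  if [[ -f $file ]] && (( $(date +%s) - $(stat -c %Y "$file") < ttl )); then
    cat "$file"            # cache hit: replay the stored stdout
  else
    "$@" | tee "$file"     # cache miss: run the command and store stdout
  fi
}
# example: cached 300 uname -a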
281,492 | File: TABLE11234 9555 87676 2344 Expected output: Description of the following table:TABLE112349555876762344 | With sed : $ sed -e '1i\Description of the following table:' <fileDescription of the following table:TABLE112349555876762344 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
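The 1i\ form with the text on the same line relies on GNU sed; if portability matters, the same output can be produced with, for example:
{ printf '%s\n' 'Description of the following table:'; cat file; }
or
awk 'FNR==1{print "Description of the following table:"} 1' file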
281,522 | I have a script that needs to run without any user interaction and that needs to set up some SSH port forwarding. It shouldn't prompt for a password - if SSH needs a password to proceed, I'd like my script to error out instead. Is there any way to get SSH to tell me if my SSH agent is set up correctly to connect to a particular host before attempting the connection? | ssh has a "BatchMode" option which will make it simply fail rather than asking for a password. This seems much easier than having your script try to predict whether it will ask for a password. An answer to a similar question on SuperUser points out that merely disabling PasswordAuthentication won't always work because there are multiple different interactive authentication types, and it also doesn't look like it'll stop it from prompting for a passphrase on a key. You can use ssh -o BatchMode=yes , or put it in your ssh_config for the host. Also, ssh won't prompt for a password if there is no controlling terminal (it gets an error " read_passphrase: can't open /dev/tty: No such device or address ") - you can start a process with no controlling tty with setsid , which you should ideally do when you start Jenkins rather than doing it specifically for the ssh command (this will also prevent sudo from prompting for a password, etc) Note that if you run a process with setsid it will automatically run in the background (since the shell can't work with job control with processes in different sessions), so you need to be prepared for this by redirecting its stdout/stderr to a log file (and stdin to /dev/null). You can get strange results if a program run in setsid tries to read anything at all from standard input, since the usual mechanisms for preventing a background process from reading the terminal don't work. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281522",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169220/"
]
} |
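A small sketch of how a script might use this to fail fast instead of hanging on a prompt (the host, user and forwarding spec are placeholders):
if ssh -o BatchMode=yes -o ConnectTimeout=5 user@host true; then
    ssh -o BatchMode=yes -N -L 8080:localhost:80 user@host &
else
    echo 'ssh key/agent authentication is not set up for user@host' >&2
    exit 1
fi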
281,535 | I blocked an abusive IP from a CentOS server using iptables , dropping all connection attempts on all services / ports. As is the way of things, the server with this IP may have been part of a botnet, and may have been cleaned in the time since I blocked it. I would like to find out if it's still trying to attack the server, so I can decide whether to unblock the IP... without unblocking it first. I have tried searching through /var/log for anything that looks like iptables , grepped /var/log/secure for the offending IP, but have turned up nothing. Is there a log of dropped connection attempts for iptables , or a way to configure the rule to log attempts but still drop them? | In addition to the other answers, iptables -v -L lists the counts of packets and bytes that traverse a given rule, so you can see how much traffic you're dropping, and it wouldn't be too hard to write a tool that parses and reports that info. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281535",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22406/"
]
} |
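To answer the last part of the question directly, logging the attempts while still dropping them, a common pattern is a LOG rule inserted just before the DROP rule (1.2.3.4 stands in for the blocked address):
iptables -I INPUT 1 -s 1.2.3.4 -j LOG --log-prefix 'dropped-abuser: ' --log-level 4
iptables -I INPUT 2 -s 1.2.3.4 -j DROP
Matches then show up in the kernel log (dmesg, /var/log/messages or journalctl -k, depending on the distribution), and iptables -v -L keeps counting them.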
281,543 | For some reason sed has always intimidated me, its syntax is a little odd I suppose. I want to make sure I understand what the following will do. sed -i.bak '/^x /d' "$SOME_FILE" This will first make a back up of $SOME_FILE at ${SOME_FILE}.bak ( -i.bak ). Each line in $SOME_FILE matching the regular expression " ^x " (meaning lines starting with an ' x ' followed by a space) will be deleted. | Yes, your understanding is correct. From your format, I assume you are using GNU sed (other implementations might need a space between the -i and the .bak and some might not support -i at all). Its -i works as follows (from info sed ): -i[SUFFIX]--in-place[=SUFFIX] This option specifies that files are to be edited in-place. GNU `sed' does this by creating a temporary file and sending output to this file rather than to the standard output.(1). This option implies `-s'. When the end of the file is reached, the temporary file is renamed to the output file's original name. The extension, if supplied, is used to modify the name of the old file before renaming the temporary file, thereby making a backup copy(2)). This rule is followed: if the extension doesn't contain a `*', then it is appended to the end of the current filename as a suffix; if the extension does contain one or more `*' characters, then _each_ asterisk is replaced with the current filename. This allows you to add a prefix to the backup file, instead of (or in addition to) a suffix, or even to place backup copies of the original files into another directory (provided the directory already exists). If no extension is supplied, the original file is overwritten without making a backup. The d command deletes any line on which the previous expression was successful. Strictly speaking , it deletes the "pattern space", but in simple sed scripts that is the line. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/281543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121116/"
]
} |
281,589 | I am creating an empty file... dd if=/dev/zero of=${SDCARD} bs=1 count=0 seek=$(expr 1024 \* ${SDCARD_SIZE}) ...then turning it into an drive image... parted -s ${SDCARD} mklabel msdos ...and creating partitions on it parted -s ${SDCARD} unit KiB mkpart primary fat32 ${IMAGE_ROOTFS_ALIGNMENT} $(expr ${IMAGE_ROOTFS_ALIGNMENT} \+ ${BOOT_SPACE_ALIGNED})parted -s ${SDCARD} unit KiB mkpart primary $(expr ${IMAGE_ROOTFS_ALIGNMENT} \+ ${BOOT_SPACE_ALIGNED}) $(expr ${IMAGE_ROOTFS_ALIGNMENT} \+ ${BOOT_SPACE_ALIGNED} \+ $ROOTFS_SIZE) How do I use mkfs.ext and mkfs.vfat without mounting this image? | You want to format a partition in a disk-image file, rather than the entire image file. In that case, you need to use losetup to tell linux to use the image file as a loopback device. NOTE: losetup requires root privileges, so must be run as root or with sudo. The /dev/loop* devices it uses/creates also require root privs to access and use. e.g (as root) # losetup /dev/loop0 ./sdcard.img# fdisk -l /dev/loop0Disk /dev/loop0: 1 MiB, 1048576 bytes, 2048 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x54c246abDevice Boot Start End Sectors Size Id Type/dev/loop0p1 1 1023 1023 511.5K c W95 FAT32 (LBA)/dev/loop0p2 1024 2047 1024 512K 83 Linux# file -s /dev/loop0p1/dev/loop0p1: data# mkfs.vfat /dev/loop0p1 mkfs.fat 3.0.28 (2015-05-16)Loop device does not match a floppy size, using default hd params# file -s /dev/loop0p1/dev/loop0p1: DOS/MBR boot sector, code offset 0x3c+2, OEM-ID "mkfs.fat", sectors/cluster 4, root entries 512, sectors 1023 (volumes <=32 MB) , Media descriptor 0xf8, sectors/FAT 1, sectors/track 32, heads 64, serial number 0xfa9e3726, unlabeled, FAT (12 bit) and, finally, detach the image from the loopback device: # losetup -d /dev/loop0 See man losetup for more details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87529/"
]
} |
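On systems where the partition nodes (/dev/loop0p1, /dev/loop0p2, ...) do not appear by themselves, losetup can be asked to scan the partition table; a hedged variant of the same sequence using the question's ${SDCARD} image:
LOOP=$(losetup -f --show -P "${SDCARD}")   # -P creates ${LOOP}p1, ${LOOP}p2, ...
mkfs.vfat "${LOOP}p1"
mkfs.ext4 "${LOOP}p2"
losetup -d "$LOOP"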
281,615 | The easiest/simplest understanding of the web is a. When you connect to your ISP, the ISP gives a dyanmic address (like a temporary telephone number) only for the duration of that connection, the next time you connect, you will again have a different dynamic IP Address. b. You use the browser to to different sites which have static IP Address (like permanent numbers or/and permanent address of an establishment). Now is there a way to get self's IP address instead of going to a web-service like whatismyipaddress.com. The connection is as follows :- ISP - Modem/Router - System Edit - The Modem/Router is a D-Link DSL-2750U ADSL router/modem. http://www.dlink.co.in/products/?pid=452 I did see How to track my public IP address in a log file? but that also uses an external web-service, it would be better/nicer if we could do without going to an exernal URL/IP address for the same. | In addition to Tony´s answer, of querying OpenDNS, which I use in my scripts upon logging on to my servers to display both the local machine and remote public IP address: echo `hostname` `hostname -i` `dig +short +time=1 myip.opendns.com @resolver1.opendns.com` Google also offers a similar service. dig TXT +short o-o.myaddr.l.google.com @ns1.google.com | awk -F'"' '{ print $2}' If you have a private IP address, behind a home or corporate router/infra-structure, or even if you are your own router, these services in the Internet will reveal the public IP address you are using to reach them, as it is what arrives to them doing the request. Please do note that the above methods only work if the Linux machine in question has direct access to the Internet . If your Linux server is your router, besides you being able to have a look at your current interfaces, you might also do: hostname -i As normally the public IP address is often the main/first interface. If not the first interface, you might also do: $hostname -I95.xx.xx.xxx 192.168.202.1 192.168.201.1 Which shows you all the IP addresses of the machine interfaces. Please read too: How To Find My Public IP Address From Command Line On a Linux Again, if the Linux server is the router, it might be interesting to place a script in /etc/dhcp/dhclient-exit-hooks.d to track and act on your IP changes, as I documented in this question: Better method for acting on IP address change from the ISP? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
281,621 | Systems often have multiple versions of binaries and the one selected depends on the priority in $PATH . For instance, the system I am using has a couple version of sort: $ which sort~/coreutils-8.25/bin/sort$ ~/coreutils-8.25/bin/sort --version | head -n 1sort (GNU coreutils) 8.25$ /bin/sort --version | head -n 1sort (GNU coreutils) 8.4 On the system I am using, the version from GNU coreutils 8.25 is selected by an invocation of sort because of its precedence in PATH . However, the MANPATH environment variable on the system has been established such that the man page for the sort from GNU coreutils 8.4 is displayed (i.e., for /bin/sort , which is not the binary having precedence). A three-part question arises from this scenario. First, is there a simple way to instruct man (or the shell) to use or produce a form of MANPATH that reflects PATH , or must one do this manually (i.e., by finding the paths to the man pages that are associated with each entry in PATH and then concatenating these man paths in the same order as PATH , an exercise that would have to be repeated any time a change is made to PATH )? Were there a mechanism to establish concordance between PATH and MANPATH , then the expected man page would be displayed automatically, avoiding the problem of inadvertently reading a man page for a version other than the one used by default. Second, is there a command that allows one to quickly determine the path of the default man page (e.g., something akin to which "man sort" , which would report the path of the man page that is displayed when executing man sort ). For instance, when I type man sort , I have no indication of the specific file on the system that is being delivered to the pager. Third, is there a way to obtain the man page for an explicit version of a command (something like man ~/coreutils-8.25/bin/sort in my case for the GNU coreutils 8.25 version of sort, rather than having to track down the associated file, which in my case can be found to be ~/coreutils-8.25/share/man/man1/sort.1 or ~/coreutils-8.25/man/sort.1). | In addition to Tony´s answer, of querying OpenDNS, which I use in my scripts upon logging on to my servers to display both the local machine and remote public IP address: echo `hostname` `hostname -i` `dig +short +time=1 myip.opendns.com @resolver1.opendns.com` Google also offers a similar service. dig TXT +short o-o.myaddr.l.google.com @ns1.google.com | awk -F'"' '{ print $2}' If you have a private IP address, behind a home or corporate router/infra-structure, or even if you are your own router, these services in the Internet will reveal the public IP address you are using to reach them, as it is what arrives to them doing the request. Please do note that the above methods only work if the Linux machine in question has direct access to the Internet . If your Linux server is your router, besides you being able to have a look at your current interfaces, you might also do: hostname -i As normally the public IP address is often the main/first interface. If not the first interface, you might also do: $hostname -I95.xx.xx.xxx 192.168.202.1 192.168.201.1 Which shows you all the IP addresses of the machine interfaces. Please read too: How To Find My Public IP Address From Command Line On a Linux Again, if the Linux server is the router, it might be interesting to place a script in /etc/dhcp/dhclient-exit-hooks.d to track and act on your IP changes, as I documented in this question: Better method for acting on IP address change from the ISP? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281621",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14960/"
]
} |
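On systems using man-db (most Linux distributions), several pieces of what the question asks for are already built in; a few hedged examples, assuming coreutils 8.25 was installed under ~/coreutils-8.25 as in the question:
manpath      # derives a man search path from $PATH when MANPATH is unset
man -w sort  # print the file that 'man sort' would display
man -l ~/coreutils-8.25/share/man/man1/sort.1   # view one specific page directly
With man-db, leaving MANPATH unset (or giving it a leading or trailing colon) lets this PATH-derived value be used, which covers the first part of the question.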
281,659 | I have a shell script in which I am running perl script by below code. perl perlscript.pl In the perl script I have defined a variable called $circle . Now I want to use this variable value in my shell script. How can I call? | If your perl script produces no other output than the value of $circle, you can use command substitution to store that output in a variable. For example: circle=$(perl perlscript.pl) If the perl script produces other output as well (or not output at all), you'll have to either: extract only the value you want from the output using the usual text processing tools ( sed , awk , perl , grep , etc). Here's a very simple example: circle=$(perl perlscript.pl | sed -e 's/junk.i.dont.want//') use an indirect method, such as having the perl script write the value of $circle to a file (e.g. /path/to/circle ) for your shell to read it (e.g. circle=$(cat /path/to/circle) ) NOTE: Without more details from you, it's impossible to provide more than generic advice like this. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281659",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125503/"
]
} |
281,661 | After window resizing, font size changes, etc., how can I easily and quickly check what is the current display width of my terminal? | This has been answered (and mis-answered) repeatedly. But: tput cols provides information that the operating system can tell you about the width. the COLUMNS variable may be set by your shell, but (a) it is unreliable (set in certain shells) and has the drawback that if exported will interfere with full-screen applications. the resize program can tell you the size for special cases where the terminal cannot negotiate its window-size with the operating system. Further reading: COLUMNS in the ncurses manual page. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281661",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
281,674 | I need devtools because I need the function install_github to install the non-CRAN package digitize here .I installed R by sudo apt-get install r-cran-robustbase I did not install R's packages right away, as terdon's answer proposes, but I could correct the permissions: sudo chmod 755 /usr/lib/R/site-library/ which I think is the default setting. I then had to do as rcs' answer proposes, to successfully install devtools and tpoisot/digitize but only with sudo apt-get install libssl-devsudo apt-get install libcurl4-openssl-devRinstall.packages('httr')install.packages('git2r')install.packages('devtools')library(devtools)install.packages('readbitmap')install_github('tpoisot/digitize') The output of the last command can be Skipping install for github remote, the SHA1 (d16e28b9) has not changed since last install. Use `force = TRUE` to force installation Do install_github('tpoisot/digitize', force = TRUE) but you may get ...'/usr/lib/R/bin/R' --no-site-file --no-environ --no-save --no-restore --quiet \ CMD INSTALL '/tmp/RtmpX8eOLX/devtools57475d25a113/tpoisot-digitize-d16e28b' \ --library='/usr/local/lib/R/site-library' --install-tests Error: ERROR: no permission to install to directory ‘/usr/local/lib/R/site-library’Error: Command failed (1) I could not find a way to install digitize without sudo . So do sudo R , and repeat the same and you get ...'/usr/lib/R/bin/R' --no-site-file --no-environ --no-save --no-restore --quiet \ CMD INSTALL '/tmp/RtmpAlAT4e/devtools57e864e8c490/tpoisot-digitize-d16e28b' \ --library='/usr/local/lib/R/site-library' --install-tests * installing *source* package ‘digitize’ ...** R** inst** preparing package for lazy loading** help*** installing help indices** building package indices** testing if installed package can be loaded* DONE (digitize) Add masi to the existing group staff to work without sudo in R ; which you need also in a fresh installation sudo usermod -a -G staff masi Tests of the installation I follow the guide here . I start R in $HOME/Pictures/ without sudo and use their test image here . Select four points in the axes with mouse cal = digitize::ReadAndCal('Rintro-snail1.jpg') Do data.points = digitize::DigitData(col = 'red') and choose manually points which are your data points I close the Plot window by doing second-click. Do df = digitize::Calibrate(data.points, cal, 0.1, 0.4, 0.0, 0.6) and seeing df x y1 71.50 NA2 65.65 NA...24 26.80 NA Doing head(df) x y1 71.50 NA2 65.65 NA3 64.60 NA4 60.85 NA5 59.05 NA6 58.15 NA Installation Details In R and without sudo > .Library[1] "/usr/lib/R/library"> > .libPaths()[1] "/usr/local/lib/R/site-library" "/usr/lib/R/site-library" [3] "/usr/lib/R/library" Command ls /usr/lib/R/library/ which does not list devtools . Why? 
base compiler grid methods rpart survivalboot datasets KernSmooth mgcv spatial tcltkclass foreign lattice nlme splines toolscluster graphics MASS nnet stats translationscodetools grDevices Matrix parallel stats4 utils Command ls -la /usr/local/lib/R/ total 12drwxrwsr-x 3 root staff 4096 touko 19 22:25 .drwxr-xr-x 5 root root 4096 touko 19 22:25 ..drwxrwsr-x 2 root staff 4096 touko 19 22:25 site-library Command ls -la /usr/local/lib/ total 20drwxr-xr-x 5 root root 4096 touko 19 22:25 .drwxr-xr-x 14 root root 4096 touko 19 22:13 ..drwxrwsr-x 4 root staff 4096 huhti 21 01:13 python2.7drwxrwsr-x 3 root staff 4096 huhti 21 01:08 python3.5drwxrwsr-x 3 root staff 4096 touko 19 22:25 R Command R_LIBS_USER="/usr/local/lib/R/site-library/" R R version 3.2.3 (2015-12-10) -- "Wooden Christmas-Tree" Copyright (C) 2015 The R Foundation for Statistical Computing Platform: x86_64-pc-linux-gnu (64-bit) ... library(devtools) gets loaded Differential tools This project is more popular and can work better https://github.com/markummitchell/engauge-digitizer Reasons for previous bugs No clean system: systems which were upgraded from 14.04, 15.10, etc. Messed up permissions/owners because of the previous thing. Own mistakes in the process. No backups in the case of failure. ... missing docs System: Ubuntu 16.04 64 bit in a clean installation Hardware: Dell PC 2013, Macbook Air 2013-mid, ... | httr imports the openssl package which needs as system requirement libssl-dev ( sudo apt install libssl-dev ) ------------------------- ANTICONF ERROR ---------------------------Configuration failed because openssl was not found. Try installing: * deb: libssl-dev (Debian, Ubuntu, etc)... The curl package needs as system requirement libcurl4-openssl-dev : ------------------------- ANTICONF ERROR ---------------------------Configuration failed because libcurl was not found. Try installing: * deb: libcurl4-openssl-dev (Debian, Ubuntu, etc)... So, to install you will need to run: sudo apt-get install libssl-devsudo apt-get install libcurl4-openssl-dev Then start an R shell with sudo R and: install.packages('httr')install.packages('git2r')install.packages('devtools')library(devtools)install_github('tpoisot/digitize') | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/281674",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
281,684 | I have an nginx -daemon running on a Debian (8.3). The nginx process occasionally runs into resource limitations when trying to write log files: too many open files . The nginx master process is executed with root, while each of the four worker processes is executed with www-data user permissions. When I checked for the nginx-master and each worker process limit configuration I discovered something odd. cat /proc/{nginx-master-process-id}/limitsLimit Soft Limit Hard Limit Units…Max open files 1024 4096 files…cat /proc/{nginx-any-worker-process-id}/limits…Max open files 30000 30000 files… Each nginx worker is allowed to open 30000 files. The nginx master process though is only allowed to open 1024 files, respectively 4096 files regarding the hard limit. When I check for the root user ulimit settings I see no such limit defined! Where does this 1024/4096 setting may come from? root ulimit settings # logged in as rootulimit -Hunlimited Additionally I checked the daemon config: /lib/systemd/system/nginx.service [Unit]Description=A high performance web server and a reverse proxy serverAfter=network.target[Service]Type=forkingPIDFile=/run/nginx.pidExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reloadExecStop=-/sbin/start-stop-daemon --quiet --stop --signal QUIT --retry QUIT/5 --pidfile /run/nginx.pid# Give Passenger a chance to clean up before being killed by systemd.ExecStop=/bin/sleep 1TimeoutStopSec=5KillMode=mixed[Install]WantedBy=multi-user.target I see no ulimit configuration here, either. What places else can I check to modify the 1024/4096 nofile limit for the nginx -master process? | It seems the problem , so to speak, was my wrong assumption that a systemd service respects the ulimit configured in /etc/security/limits.conf . As it turns out a daemon configured via systemd does intentionally ignore settings in the limits.conf and requires a LimitNOFILE configuration in the service configuration file. Updating my systemd unit file fixed the problem: /lib/systemd/system/nginx.service [Unit]Description=A high performance web server and a reverse proxy serverAfter=network.target[Service]Type=forkingPIDFile=/run/nginx.pidExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reloadExecStop=-/sbin/start-stop-daemon --quiet --stop --signal QUIT --retry QUIT/5 --pidfile /run/nginx.pid# Give Passenger a chance to clean up before being killed by systemd.ExecStop=/bin/sleep 1TimeoutStopSec=5KillMode=mixedLimitNOFILE=30000 # <= This line was added[Install]WantedBy=multi-user.target Here are some links and resources regarding this issue: https://serverfault.com/questions/628610/increasing-nproc-for-processes-launched-by-systemd-on-centos-7 https://bugzilla.redhat.com/show_bug.cgi?id=754285 Thanks to @ijaz-khan for pointing me in that direction. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10916/"
]
} |
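Rather than editing the packaged unit under /lib/systemd/system (which a package upgrade may overwrite), the same setting can live in a drop-in; a sketch:
systemctl edit nginx        # opens /etc/systemd/system/nginx.service.d/override.conf
# put these two lines in the override:
[Service]
LimitNOFILE=30000
# then reload and restart:
systemctl daemon-reload
systemctl restart nginx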
281,692 | My Arch based system directly boots to tty and i never use x system or gui. I want to change prompt or whatever which looks like [root@0 ~]# . I want to change it to current time 12 hours format and no am pm or second. It means [hh:mm]. And if it is in red then it would be fantastic. I tried some guides and changed it to [hh:mm:ss] by PS1="/@" but it goes away when i reboot. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281692",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169347/"
]
} |
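Assuming the login shell is bash (the Arch default), one way to get a persistent red [hh:mm] prompt (12-hour clock, no am/pm, no seconds) is to set PS1 in ~/.bashrc, which is read for every interactive shell instead of being lost at reboot (make sure ~/.bash_profile sources ~/.bashrc for console logins):
# ~/.bashrc
PS1='\[\e[0;31m\][$(date +%I:%M)]\[\e[0m\]\$ '
The \[ \] pairs mark the colour codes as non-printing so line editing stays correct, and %I gives the 12-hour hour.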
281,739 | In my project I have the following snippet: local output="$(bash "${1##*/}")"
echo "$?" This always prints zero due to local , however, removing local causes the $? variable to behave correctly: which is to assume the exit code from the subshell. My question is: how I can keep this variable local whilst also capturing the exit value? | #!/bin/bash
thing() {
  local foo=$(asjkdh) ret="$?"
  echo "$ret"
} This will echo 127 , the correct error code for "command not found". You can use local to define more than one variable. So I just also create the local variable RET to capture the exit code of the subshell before local succeeds and sets $? to zero. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/281739",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77362/"
]
} |
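Another common idiom is to separate the declaration from the assignment, so that $? after the assignment reflects the command substitution itself rather than local:
local output
output="$(bash "${1##*/}")"
echo "$?"    # exit status of the subshell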
281,774 | I installed supervisor on ubuntu server 16.04. $ sudo apt-get install supervisor$ sudo update-rc.d supervisor defaults After rebooting, supervisor didn't get started automatically. Checked the status: qinking126@nas:~$ sudo service supervisor status[sudo] password for qinking126:● supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: http://supervisord.org I'm not sure why it's inactive (dead). What do I need to check to get it fixed? | I am convinced that this issue is a packaging bug in the Supervisor package in Ubuntu 16.04 and it seems to have been caused by the switch to systemd: This issue was already reported upstream on the Supervisor project's issue tracker (where nothing can be fixed) in issue 735 . I was bitten by this issue a few days ago and was astonished to find that this issue was never reported to the package maintainers, even though Ubuntu 16.04 was released quite a while ago and this breaks backwards compatibility and expected behavior. This is why I decided to report this issue to the package maintainers in bug 1594740 . I documented a simple workaround in bug 1594740 that doesn't require any configuration files to be created - you just need to enable and start the Supervisor daemon after installation of the package: # Make sure Supervisor comes up after a reboot.sudo systemctl enable supervisor# Bring Supervisor up right now.sudo systemctl start supervisor I'm not so sure that this will be fixed in Ubuntu 16.04 but at least now there's a central place to gather complaints and document workarounds (in bug 1594740 , not in issue 735 ). If anyone was bitten by this issue, consider voicing your concern in bug 1594740 to convince the package maintainers to fix this issue. Thanks! Update (2017-03-24): Yesterday a fix for this issue was released to xenial-updates as a result of bug 1594740 so new installations should no longer run into this issue. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/281774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169392/"
]
} |
281,784 | I want to replace spaces (i.e., " ") or new lines (i.e., carriage return) with underscores in a special case - when they occur between two specific strings. I have html pages and I want to replace the blank spaces and new lines with underscores when they occur between two specific strings. Example: lots of text...page_5.html months agoThis is the password: 6743412 <http://website.com etc...more text... I want to to go from above to below: lots of text...page_5.html months ago__This_is_the_password:_6743412_<http://website.com etc...more text... Basically, I want to do the replacement only between the strings ago and <http It is repetitive html so if I can get this to work it would be very helpful and easy to extract the modified text later. Something using sed or awk would be best for me. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/281784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148402/"
]
} |
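For the substitution the question asks about, one possible sketch uses perl, which handles a span that crosses line boundaries more comfortably than line-oriented sed; it slurps the whole file, finds each lazy match between ago and <http, and maps spaces, carriage returns and newlines inside it to underscores (needs perl 5.14+ for the /r modifier):
perl -0777 -pe 's/(?<=ago)(.*?)(?=<http)/ $1 =~ tr{ \r\n}{_}r /gse' input.html
This is only one way to express it and assumes the two marker strings never nest or overlap.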
281,794 | I want to rename all the files in a folder with PREFIX+COUNTER+FILENAME for ex. input: england.txt canada.txt france.txt output: CO_01_england.txt CO_02_canada.txt CO_03_france.txt | This does what you ask: n=1; for f in *.txt; do mv "$f" "CO_$((n++))_$f"; done How it works n=1 This initializes the variable n to 1. for f in *.txt; do This starts a loop over all files in the current directory whose names end with .txt . mv "$f" "CO_$((n++))_$f" This renames the files to have the CO_ prefix with n as the counter. The ++ symbol tells bash to increment the variable n . done This signals the end of the loop. Improvement This version uses printf which allows greater control over how the number will be formatted: n=1; for f in *.txt; do mv "$f" "$(printf "CO_%02i_%s" "$n" "$f")"; ((n++)); done In particular, the %02i format will put a leading zero before the number when n is still in single digits. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/281794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169407/"
]
} |
281,858 | I found three configuration files. .xinitrc .xsession .xsessionrc I know that the first one is for using startx and the second and third are used when using a display manager. But what is the difference between the last two? | ~/.xinitrc is executed by xinit , which is usually invoked via startx . This program is executed after logging in: first you log in on a text console, then you start the GUI with startx . The role of .xinitrc is to start the GUI part of the session, typically by setting some GUI-related settings such as key bindings (with xmodmap or xkbcomp ), X resources (with xrdb ), etc., and to launch a session manager or a window manager (possibly as part of a desktop environment). ~/.xsession is executed when you log in in graphical mode (on a display manager ) and the display manager invokes the “custom” session type. (With the historical display manager xdm, .xsession is always executed, but with modern display managers that give the user a choice of session type, you usually need to pick “custom” for .xsession to run.) Its role is both to set login-time parameters (such as environment variables) and to start the GUI session. A typical .xsession is #!/bin/sh. ~/.profile. ~/.xinitrc ~/.xsessionrc is executed on Debian (and derivatives such as Ubuntu, Linux Mint, etc.) by the X startup scripts on a GUI login, for all session types and (I think) from all display managers. It's also executed from startx if the user doesn't have a .xinitrc , because in that case startx falls back on the same session startup scripts that are used for GUI login. It's executed relatively early, after loading resources but before starting any program such as a key agent, a D-Bus daemon, etc. It typically sets variables that can be used by later startup scripts. It doesn't have any official documentation that I know of, you have to dig into the source to see what works. .xinitrc and .xsession are historical features of the X11 Window system so they should be available and have a similar behavior on all Unix systems. On the other hand, .xsessionrc is a Debian feature and distributions that are not based on Debian don't have it unless they've implemented something similar. .xprofile is very similar to .xsessionrc , but it's part of the session startup script some display managers including GDM (the GNOME display manager) and lightdm, but not others such as xdm and kdm. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/281858",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147698/"
]
} |
281,880 | [gala@arch ~]$ sudo !!sudo hdparm -i /dev/sda/dev/sda: Model=KINGSTON SHFS37A120G, FwRev=603ABBF0, SerialNo=50026B725B0A1515 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% } RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4 BuffType=unknown, BuffSize=unknown, MaxMultSect=1, MultSect=1 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=234441648 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120} PIO modes: pio0 pio1 pio2 pio3 pio4 DMA modes: mdma0 mdma1 mdma2 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 AdvancedPM=yes: unknown setting WriteCache=enabled Drive conforms to: unknown: ATA/ATAPI-2,3,4,5,6,7 * signifies the current active mode Where does hdparm read the Model field from? Somewhere from sysfs ? Where from? | # strace hdparm -i /dev/sda…ioctl(3, HDIO_GET_IDENTITY, 0x7fffa930c320) = 0brk(0) = 0x1c42000brk(0x1c63000) = 0x1c63000write(1, "\n", 1) = 1write(1, " Model=… So hdparm gets its information from the HDIO_GET_IDENTITY ioctl , not from sysfs. That doesn't mean that the information can't be accessed from sysfs, of course. Next we can look up HDIO_GET_IDENTITY in the kernel source. LXR is convenient for that. The relevant hit shows a call to ata_get_identity . This function looks up the model in the device description at the offset ATA_ID_PROD in the device description. Looking at where else ATA_ID_PROD is used, and with sysfs in mind, we find a hit in ide-sysfs.c , in a function called model_show . This function is referenced by the macro call just below DEVICE_ATTR_RO(model) , so if the ata driver is exposing the IDE interface, there's a file called model in the device's sysfs directory that contains this information. If the ata driver is exposing the SCSI interface, tracing the kernel source is a lot more complicated, because the code uses different ways of extracting the information from the hardware. But as it turns out there is also a model field in the device's sysfs directory. As for where the device's sysfs directory is, there are several ways to access it. The sysfs.txt file in the kernel documentation documents this, not very well. The simplest way to access it is via /sys/block which contains an entry for each block device: $ cat /sys/block/sda/device/model There are a lot of symbolic links in /sys . The “physical” location of that directory depends on how the disk is connected to the system; for example it has the form /sys/devices/pci…/…/ata…/host…/target…/… for an ATA device with a SCSI interface that's connected to a PCI bus. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281880",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164429/"
]
} |
281,938 | Accorging to GNU documentation: ‘\<’ Match the empty string at the beginning of word.‘\>’ Match the empty string at the end of word. My /etc/fstab looks like this: /dev/sdb1 /media/fresh ext2 defaults 0 0 I want grep to return TRUE/FALSE for the existence of /media/fresh. I tried to use \< and \> but it didn't work. Why? egrep '\</media/fresh\>' /etc/fstab Workaround: egrep '[[:blank:]]/media/fresh[[:blank:]]' /etc/fstab But it looks uglier. My grep is 2.5.1 | \< and \> match empty string at the begin and end of a word respectively and only word constituent characters are: [[:alnum:]_] From man grep : Word-constituent characters are letters, digits, and the underscore. So, your Regex is failing because / is not a valid word constituent character. Instead as you have spaces around, you can use -w option of grep to match a word: grep -wo '/media/fresh' /etc/fstab Example: $ grep -wo '/media/fresh' <<< '/dev/sdb1 /media/fresh ext2 defaults 0 0'/media/fresh | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/281938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11437/"
]
} |
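Since the question asks for a TRUE/FALSE result, here is a small sketch based on the answer above: grep -q suppresses output, so the exit status alone tells you whether the entry exists.
    if grep -qw '/media/fresh' /etc/fstab; then
        echo "entry present"
    else
        echo "entry absent"
    fi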
281,991 | I have to set an ACL on multiple files. I have downloaded a list of stored objects using the command below. C:\Users\Gshrivastava\Downloads\curl_748_0>curl -o urlname.csv -i -k -H "Authorization: HCP bXFl:29def7dbc8892a9389ebc7a5210dd844" -H "Content-Type: application/xml" -H "Accept:application/xml" -d @mqe.xml "http://tenant.hcp3.hdsblr.com/query?prettyprint I have then sorted the URL names into a text file. ns.tenant.hcp3.hdsblr.com/rest/pic/Cat/images.jpg ns.tenant.hcp3.hdsblr.com/rest/pic/Cat/6.png ns.tenant.hcp3.hdsblr.com/rest/pic/landscape/9.png ns.tenant.hcp3.hdsblr.com/rest/pic/landscape/5.png (content of the text file) Now I want to use this file as an argument or variable so that the ACL is set on all the files. curl.exe -k http://ns.tenant.hcp3.hdsblr.com/rest/ACL/filename.ext/?type=acl -i -H "Authorization: HCP YWRtaW4=:29def7dbc8892a9389ebc7a5210dd844" -T acl.xml | If I understand correctly, you have a file containing a list of URLs (one per line), and you want to pass those URLs to CURL. There are two main ways to do that: with xargs , or with command substitution . With xargs : xargs <urls.txt curl … With command substitution: curl … $(cat urls.txt) Both methods mangle some special characters, but given what characters are valid in URLs, this shouldn't be an issue, except that with xargs , single quotes ( ' ) need to be encoded as %27 . Alternatively, use xargs -l . Note that since this is a Unix site, I'm assuming that you're running a Unix variant and invoking these commands from a Unix shell such as bash. Given that you're running curl.exe , you appear to be using Windows. If you're going to use Unix tools, I recommend that you do so from a Unix shell such as bash or zsh; Windows does not come with xargs any more than it comes with curl , and cmd does not have command substitution (at least not in the same form). There is probably a way to do this with Windows tools, but I don't know what it is and it's off-topic here. Also, if you're using Unix tools under Windows, take care that your list of URLs uses Unix line endings (LF only), not Windows line endings (CR+LF). Unix tools expect a line to end with LF and treat CR as an ordinary character. For more information, see Directories are listed twice and many other questions on this site. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169519/"
]
} |
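An alternative sketch for the entry above, reading the URL list line by line in a shell loop instead of using xargs or command substitution. The header value and acl.xml come from the question; the exact URL suffix (/?type=acl) is assumed from the example request there, and urls.txt is the hypothetical list file.
    while IFS= read -r url; do
        # one PUT of the ACL document per object in the list
        curl -k -i "http://$url/?type=acl" \
             -H "Authorization: HCP YWRtaW4=:29def7dbc8892a9389ebc7a5210dd844" \
             -T acl.xml
    done < urls.txt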
281,997 | There is a big file containing a pattern which is repeated periodically in the file. I want to extract just a specific occurrence of the pattern as well as the next N lines. Here is an example, but the numbers before 'members of the group' don't really exist. input: 1 members of the group......2 members of the group.........n members of the group......... output: 85 members of the group............... (85th match and the next 5 lines) | Here's one way with awk : awk -vN=85 -vM=5 'BEGIN{c=0}/PATTERN/{c++; if (c==N) {l=NR;last=NR+M}}{if (NR<=last && NR>=l) print}' infile Where N is the N th match of PATTERN and M is the number of lines that follow. It sets a counter, and when the N th matching line is encountered it saves the line number. It then prints the lines from the current NR up to NR + M . For the record, here's how you do it with sed ( gnu sed syntax): sed -nE '/PATTERN/{x;/\n{84}/{x;$!N;$!N;$!N;$!N;$!N;p;q};s/.*/&\n/;x}' infile This is using the hold space to count. Each time it encounters a line matching PATTERN it e x changes buffers and checks if there are N-1 occurrences of the \n ewline character in the hold buffer. If the check is successful it e x changes again, pulls in the next M lines with the $!N command and p rints the pattern space then q uits. Otherwise it just adds another \n ewline char to the hold space and e x changes back. This solution is less convenient as it quickly becomes cumbersome when M is a big number and requires some printf -fu to build up a sed script (not to mention the pattern and hold space limits with some sed s). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/281997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112129/"
]
} |
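A usage sketch of the awk approach above with the question's actual pattern and a hypothetical input file name; the counter/line-number logic is the same, just written out with explicit statements.
    awk -v N=85 -v M=5 '
        /members of the group/ { c++; if (c == N) { start = NR; end = NR + M } }
        start && NR >= start && NR <= end        # print the 85th match and the next 5 lines
    ' input.txt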
282,055 | I have a tar that was generated on a Linux machine. I need to upload part of that tar to another Linux machine. The full tar is huge and will take hours to upload. I am now on a Mac OS X machine and this is my problem: I extract the tar to a folder and locate what I need to upload to the new server. I create a smaller tar containing just what I want to upload. I upload and extract that to the new Linux machine. When I look at the server, it is full of ._ files. For every file uploaded there is a ._ file, like text1.txt , ._text1.txt , text2.txt , ._text2.txt ... OS X is including these files in the tar. I have tried tar --exclude='._*' -cvf newTar . without any difference. I do not have ssh access to the new server now. What can I do to solve that? How do I generate a clean tar? | To my understanding, tar --exclude='._*' -cvf newTar . should work: Finder creates the ._* files but newTar shouldn't contain them. But you can completely bypass those files by invoking tar in passthrough mode. For example, to copy only the files from oldTar that are under some/path , use tar -cf newTar --include='some/path/*' @oldTar | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/282055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45335/"
]
} |
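A sketch of another way to handle this on the Mac side, assuming the archive is created with the bsdtar that ships with OS X: the COPYFILE_DISABLE environment variable tells it not to generate the ._* AppleDouble entries in the first place (on older OS X releases the variable was COPY_EXTENDED_ATTRIBUTES_DISABLE).
    # create the smaller archive without ._* companions or Finder metadata
    COPYFILE_DISABLE=1 tar --exclude='.DS_Store' -cvf newTar.tar .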
282,086 | I use the auto-generated rules that come from OpenWRT as an example of NAT reflection (NAT loopback). So let's pretend there's a network 192.168.1.0/24 with two hosts (+ router): 192.168.1.100 and 192.168.1.200. The router has two interfaces LAN (br-lan) and WAN (eth0). The LAN interface has an IP 192.168.1.1 and the WAN interface has an IP 82.120.11.22 (public). There's a www server on 192.168.1.200. We want to connect from 192.168.1.100 to the web server using the public IP address. If you wanted to redirect WAN->LAN so people from the internet can visit the web server, you would add the following rules to iptables: iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.200:80 I know what the rules mean. But there are also two other rules, which are responsible for NAT reflection. One of them isn't as clear to me as the ones above. So the first rule looks like this: iptables -t nat -A PREROUTING -i br-lan -s 192.168.1.0/24 -d 82.120.11.22/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.200 And this means that all the traffic from the 192.168.1.0/24 network that is destined for the public IP on port 80 should be sent to the local web server, which means that if I type the public IP in Firefox I should get the page returned by the server, right? All the other forwarding magic in the filter table was already done, but I still can't connect to the web server using the public IP. The packet hits the rule, but nothing happens. We need another nat rule in order to make the whole mechanism work: iptables -t nat -A POSTROUTING -o br-lan -s 192.168.1.0/24 -d 192.168.1.200/32 -p tcp -m tcp --dport 80 -j SNAT --to-source 192.168.1.1 I don't know why the rule is needed. Can anyone explain what exactly the rule does? | For a NAT to work properly both the packets from client to server and the packets from server to client must pass through the NAT. Note that the NAT table in iptables is only used for the first packet of a connection. Later packets related to the connection are processed using the internal mapping tables established when the first packet was translated. iptables -t nat -A PREROUTING -i br-lan -s 192.168.1.0/24 -d 82.120.11.22/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.200 With just this rule in place the following happens. The client creates the initial packet (tcp syn) and addresses it to the public IP. The client expects to get a response to this packet with the source ip/port and destination ip/port swapped. Since the client has no specific entries in its routing table it sends it to its default gateway. The default gateway is the NAT box. The NAT box receives the initial packet, modifies the destination IP, establishes a mapping table entry, looks up the new destination in its routing table and sends the packet to the server. The source address remains unchanged. The Server receives the initial packet and crafts a response (syn-ack). In the response the source IP/port is swapped with the destination IP/port. Since the source IP of the incoming packet was unchanged the destination IP of the reply is the IP of the client. The Server looks up the IP in its routing table and sends the packet back to the client. The client rejects the packet because the source address doesn't match what it expects. 
iptables -t nat -A POSTROUTING -o br-lan -s 192.168.1.0/24 -d 192.168.1.200/32 -p tcp -m tcp --dport 80 -j SNAT --to-source 192.168.1.1 Once we add this rule the sequence of events changes. The client creates the initial packet (tcp syn) and addresses it to the public IP. The client expects to get a response to this packet with the source ip/port and destination ip/port swapped. Since the client has no specific entries in its routing tables it sends it to its default gateway. The default gateway is the NAT box. The NAT box receives the initial packet and, following the entries in the NAT table, it modifies the destination IP, source IP and possibly source port (source port is only modified if needed to disambiguate), establishes a mapping table entry, looks up the new destination in its routing table and sends the packet to the server. The Server receives the initial packet and crafts a response (syn-ack). In the response the source IP/port is swapped with the destination IP/port. Since the source IP of the incoming packet was modified by the NAT box the destination IP of the packet is the IP of the NAT box. The Server looks up the IP in its routing table and sends the packet back to the NAT box. The NAT box looks up the packet's details (source IP, source port, destination IP, destination port) in its NAT mapping tables and performs a reverse translation. This changes the source IP to the public IP, the source port to 80, the destination IP to the client's IP and the destination port back to whatever source port the client used. The NAT box looks up the new destination IP in its routing table and sends the packet back to the client. The client accepts the packet. Communication continues with the NAT translating packets back and forth. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/282086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52763/"
]
} |
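For reference, the four rules from the question grouped by purpose, as one consolidated sketch of the hairpin-NAT setup the answer above walks through:
    # WAN -> LAN port forward
    iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.200:80
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    # NAT reflection: LAN clients reaching the server via the public IP
    iptables -t nat -A PREROUTING -i br-lan -s 192.168.1.0/24 -d 82.120.11.22/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.200
    iptables -t nat -A POSTROUTING -o br-lan -s 192.168.1.0/24 -d 192.168.1.200/32 -p tcp -m tcp --dport 80 -j SNAT --to-source 192.168.1.1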
282,107 | I am trying to install Perl from source (because my server isn't connected to the internet), and while doing 'make install' it stops at: Can't locate DWIM.pm in @INC (you may need to install the DWIM module) (@INC contains: lib dist/Exporter/lib .).BEGIN failed--compilation aborted. Note: I already have DWIM Perl installed. Now when I do [root@ctl perl-5.22.2]# perl -e "print \"@INC\""/opt/dwimperl-linux-5.20.1-10-x86_64/perl/lib/site_perl/5.20.1/x86_64-linux /opt/dwimperl-linux-5.20.1-10-x86_64/perl/lib/site_perl/5.20.1 /opt/dwimperl-linux-5.20.1-10-x86_64/perl/lib/5.20.1/x86_64-linux /opt/dwimperl-linux-5.20.1-10-x86_64/perl/lib/5.20.1 and the DWIM file is located at [root@ctl perl-5.22.2]# find / -name DWIM.pm/opt/dwimperl-linux-5.20.1-10-x86_64/perl/lib/site_perl/5.20.1/DWIM.pm What I want to know is how I can modify @INC globally in Perl so that it finds DWIM.pm. | You can do it by adding its path to the PERL5LIB environment variable : export PERL5LIB=/opt/dwimperl-linux-5.20.1-10-x86_64/perl/lib/site_perl/5.20.1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/282107",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169599/"
]
} |
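A quick sketch to verify the fix from the answer above; the path is the one from the question, and the -MDWIM check assumes the module has no load-time side effects.
    export PERL5LIB=/opt/dwimperl-linux-5.20.1-10-x86_64/perl/lib/site_perl/5.20.1
    perl -e 'print "$_\n" for @INC'   # the added directory should now be listed
    perl -MDWIM -e 1                  # exits quietly if DWIM.pm can be found
The same directory can also be passed per invocation with perl -I/opt/dwimperl-linux-5.20.1-10-x86_64/perl/lib/site_perl/5.20.1 script.pl.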
282,155 | My computer recently ran out of memory (a not-unexpected consequence of compiling software while working with large GIS datasets). In the system log detailing how it dealt with the OOM condition is the following line: Out of memory: Kill process 7429 (java) score 259 or sacrifice child What is that or sacrifice child about? Surely it isn't pondering some dark ritual to keep things going? | Looking at the kernel source file oom_kill.c : after such a message is written to the system log, the OOM killer checks the children of the identified process and evaluates whether it is possible to kill one of them in place of the process itself. Here is a comment extracted from the source file explaining this: /* * If any of p's children has a different mm and is eligible for kill, * the one with the highest oom_badness() score is sacrificed for its * parent. This attempts to lose the minimal amount of work done while * still freeing memory. */ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/282155",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62988/"
]
} |
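Related knobs, as a small sketch: the kernel exposes the badness score it computed (the 259 in the log line above) and a per-process adjustment. PID 7429 is the one from the question's log and is only illustrative, since that process may already be gone.
    cat /proc/7429/oom_score                 # current badness score for the process
    echo -1000 > /proc/7429/oom_score_adj   # as root: exempt a critical process from the OOM killer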
282,190 | I have a server that only allows ssh with a key. However when I try to ssh to this machine I get the error "Permission denied (publickey)". From the auth.log on my server this is happening pre auth. The permissions on my keys and .ssh folder look fine. Also if I use -vv when attempting to ssh it doesn't appear to be attempting to use the correct key file. The only way this works is if I use the -i arg and specify the path to the key. (Which is in my .ssh folder) I've installed key before and had no problems. The only difference was that this time I had to scp the key file to the server and then use cat >> to add it to the authorised key file, rather than use ssh-copy-id. Does anyone know how I could debug this further or fix the issue rather than make me server unsecure for a while and then use ssh-copy-id? -vv output (the key that's valid doesn't even get tried for some reason.) OpenSSH_6.7p1 Raspbian-5+deb8u2, OpenSSL 1.0.1k 8 Jan 2015debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 19: Applying options for *debug2: ssh_connect: needpriv 0debug1: Connecting to $host [$ip address] port $port.debug1: Connection established.debug1: key_load_public: No such file or directorydebug1: identity file /home/ben/.ssh/id_rsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ben/.ssh/id_rsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ben/.ssh/id_dsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ben/.ssh/id_dsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ben/.ssh/id_ecdsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ben/.ssh/id_ecdsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ben/.ssh/id_ed25519 type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ben/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_6.7p1 Raspbian-5+deb8u2debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7p1 Debian-5+deb8u2debug1: match: OpenSSH_6.7p1 Debian-5+deb8u2 pat OpenSSH* compat 0x04000000debug2: fd 3 setting O_NONBLOCKdebug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected],[email protected],[email protected],ssh-ed25519,ssh-rsa,ssh-dssdebug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email 
protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected]: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected]: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: kex_parse_kexinit: none,[email protected]: kex_parse_kexinit: none,[email protected]: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup [email protected]: kex: server->client aes128-ctr [email protected] nonedebug2: mac_setup: setup [email protected]: kex: client->server aes128-ctr [email protected] nonedebug1: sending SSH2_MSG_KEX_ECDH_INITdebug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug1: Server host key: ECDSA 10:b9:0c:fa:8f:69:f2:eb:84:bd:69:32:50:1b:dd:eedebug1: Host '$host' is known and matches the ECDSA host key.debug1: Found key in /home/ben/.ssh/known_hosts:1debug2: kex_derive_keysdebug2: set_newkeys: mode 1debug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug2: set_newkeys: mode 0debug1: SSH2_MSG_NEWKEYS receiveddebug1: SSH2_MSG_SERVICE_REQUEST sentdebug2: service_accept: ssh-userauthdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug2: key: /home/ben/.ssh/id_rsa ((nil)),debug2: key: /home/ben/.ssh/id_dsa ((nil)),debug2: key: /home/ben/.ssh/id_ecdsa ((nil)),debug2: key: /home/ben/.ssh/id_ed25519 ((nil)),debug1: Authentications that can continue: publickeydebug1: Next authentication method: publickeydebug1: Trying private key: /home/ben/.ssh/id_rsadebug1: Trying private key: /home/ben/.ssh/id_dsadebug1: Trying private key: /home/ben/.ssh/id_ecdsadebug1: Trying private key: /home/ben/.ssh/id_ed25519debug2: we did not send a packet, disable methoddebug1: No more authentication methods to try.Permission denied (publickey). Details of the .ssh directory and content, ls - lddrwxr-xr-x 2 $user $user .ls -ltotal 16-rw------- 1 $user $user 733 May 3 16:27 authorized_keys-rw------- 1 $user $user 3243 May 9 15:33 key -rw-r--r-- 1 $user $user 751 May 9 15:33 key.pub -rw-r--r-- 1 $user $user 444 May 9 15:31 known_hosts | As far as I'm aware, ssh only searches for keys with the name id_rsa , id_dsa and a few others which all start id_ as show in the output in your question. 
If you have keys named anything else you must specify them on the command line, or in an ssh config file. Either rename your file key to something ssh searches for, or update .ssh/config with a relevant stanza, or use the -i option. You can use something like this in .ssh/config host my.target.serverIdentityFile ~/.ssh/key You can also use, host *IdentityFile ~/.ssh/key to force ssh to use ~/.ssh/key for all connections. It may be easier to rename the key file to id_dsa or id_rsa though (assuming the file is actually called key as in your output). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/282190",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169068/"
]
} |
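A minimal sketch of the rename route suggested in the answer above, using the file names from the question (key/key.pub) and assuming the key is an RSA key (use id_dsa etc. to match its actual type); the chmod keeps the private key permissions strict.
    mv ~/.ssh/key ~/.ssh/id_rsa
    mv ~/.ssh/key.pub ~/.ssh/id_rsa.pub
    chmod 600 ~/.ssh/id_rsa
    ssh -vv user@host   # the debug output should now show id_rsa being offered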
282,192 | I have two files File1: abc File2: 123 Now I need to combine them into one CSV file: a;1b;2c;3 As the files are really huge, I would rather not use cat and sed to process the second file. (For smaller files I can use a script.) Any idea? awk/perl? | Try the paste command: paste -d';' File1 File2 > File3 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/282192",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37661/"
]
} |
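A tiny worked example with the data from the question, showing the output the accepted command produces:
    printf 'a\nb\nc\n' > File1
    printf '1\n2\n3\n' > File2
    paste -d';' File1 File2
    # a;1
    # b;2
    # c;3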
282,199 | Assume I want to test whether a library is installed and usable by a program. I can use ldconfig -p | grep mylib to find out if it's installed on the system, but what if the library is only known via setting LD_LIBRARY_PATH ? In that case, the program may be able to find the library, but ldconfig won't. How can I check if the library is in the combined linker path? I'll add that I'm looking for a solution that will work even if I don't actually have the program at hand (e.g. the program isn't compiled yet); I just want to know that a certain library exists in ld 's paths. | ldconfig can list all the libraries it has access to. These libraries are also stored in its cache. /sbin/ldconfig -v -N will crawl all the usual library paths and list all the available libraries, without reconstructing the cache (which is not possible if you're a non-root user). It does NOT take into account libraries in LD_LIBRARY_PATH (contrary to what this post said before the edit) but you can pass additional directories on the command line by using the line below: /sbin/ldconfig -N -v $(sed 's/:/ /g' <<< $LD_LIBRARY_PATH) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/282199",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17639/"
]
} |
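Combining the question's grep test with the answer's command gives a single check that covers both the standard directories and LD_LIBRARY_PATH; mylib is a placeholder library name, and the <<< here-string assumes a bash-like shell.
    /sbin/ldconfig -N -v $(sed 's/:/ /g' <<< "$LD_LIBRARY_PATH") 2>/dev/null | grep -i 'mylib'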
282,215 | From what I understand, a compiler makes a binary file that consists of 1's and 0's that a CPU can read. I have a binary file, but how do I open it to see the 1's and 0's that are there? A text editor says it can't open it... P.S. I have an assembly-compiled binary that should be plain binary code of 1's and 0's. | According to this answer by tyranid : hexdump -C yourfile.bin unless you want to edit it of course. Most Linux distros have hexdump by default (but obviously not all). Update According to this answer by Emilio Bool : xxd does both binary and hexadecimal For bin : xxd -b file For hex : xxd file | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/282215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169672/"
]
} |
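A tiny illustration of the xxd -b form from the answer above; the exact column layout may differ slightly between xxd versions.
    printf 'AB' > sample.bin
    xxd -b sample.bin
    # 00000000: 01000001 01000010                                      AB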