372,710
I'm using KDE neon distribution (Ubuntu 16.04 LTS + latest KDE5 DE). Suspend+resume looks to be mostly working on my notebook, but the labels under icons (I have "folder view" set up as background in plasma shell) are corrupted, like this: On the left side corrupted icon, on the right fixed by dragging the icon few pixels and letting it drop back to its original place. Looks to me, as it may be not graphics driver issue, but even KDE5 plasma folder view caching bug? QUESTION: how to refresh the whole desktop easily? The KDE menu "Refresh Desktop" does not help (I guess there's some cache for icons, and it is not invalidated). how to create some high quality bug report, what kind of logs/commands output is worth of it, and where even to start to hunt down this one. While I'm programmer myself, I don't do any Qt/KDE5 development, so I don't have even idea, which part of KDE is responsible for these, where to look for errors and which tools are available for diagnostics. A quick look into dmesg and /var/log/Xorg.0.log didn't bring up anything suspicious. lshw -c video *-display description: 3D controller product: GM107M [GeForce GTX 960M] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a2 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list rom configuration: driver=nvidia latency=0 resources: irq:130 memory:de000000-deffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:e000(size=128) memory:df000000-df07ffff *-display description: VGA compatible controller product: Intel Corporation vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 06 width: 64 bits clock: 33MHz capabilities: pciexpress msi pm vga_controller bus_master cap_list rom configuration: driver=i915_bpo latency=0 resources: irq:125 memory:dd000000-ddffffff memory:b0000000-bfffffff ioport:f000(size=64) glxinfo | grep OpenGL OpenGL vendor string: NVIDIA CorporationOpenGL renderer string: GeForce GTX 960M/PCIe/SSE2OpenGL core profile version string: 4.5.0 NVIDIA 375.66OpenGL core profile shading language version string: 4.50 NVIDIAOpenGL core profile context flags: (none)OpenGL core profile profile mask: core profileOpenGL core profile extensions:OpenGL version string: 4.5.0 NVIDIA 375.66OpenGL shading language version string: 4.50 NVIDIAOpenGL context flags: (none)OpenGL profile mask: (none)OpenGL extensions:OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 375.66OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20OpenGL ES profile extensions:
I know your pain, this has been annoying me for months now. The only way to fix the desktop I've found is brute force: I made a shortcut to do this and run it every time I resume from standby:

killall plasmashell; kstart plasmashell

EDIT: 2020/10/6 - this bug has since been fixed, but for reference, the restart command for Plasma 5.18.5 is now:

kstart5 plasmashell -- --replace

I can't properly answer this but I'm posting all the info I've got so I can link this from the bug report page. The glitching is a long-standing issue with the NVidia drivers and KDE Plasma; previously the same question was asked here but it got falsely marked as a duplicate of a similar related issue: https://askubuntu.com/questions/897928/kde-desktop-icons-glitched-after-suspend-kubuntu-16-10 I had some hope after the recent 5.10.3 Plasma update as it was supposed to be fixed ( https://bugs.kde.org/show_bug.cgi?id=344326 , https://www.phoronix.com/scan.php?page=news_item&px=KDE-Plasma-5.10.3-Released ), but it didn't fix the issue for me. I'm going to follow up on that bug report with a link to this post, so I am also attaching an image of the bug on my system here. (EDIT: found the actual bug report for Plasma: https://bugs.kde.org/show_bug.cgi?id=382115 ) (EDIT2: found the bug report for Qt: https://bugreports.qt.io/browse/QTBUG-56610 and the NVidia forum thread: https://devtalk.nvidia.com/default/topic/971972/linux/icon-text-label-corruption-with-kde-plasma-5-desktop-folder-view/ )

$ cat /etc/issue
Ubuntu 17.04 \n \l
$ uname -a
Linux desktop 4.10.0-26-generic #30-Ubuntu SMP Tue Jun 27 09:30:12 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
$ plasmashell --version
plasmashell 5.10.3
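Not part of the original answer, but to make the workaround above one-keystroke friendly, here is a minimal wrapper script you could save (e.g. as ~/bin/restart-plasmashell.sh, a hypothetical path) and bind to a keyboard shortcut; it only repackages the commands already shown:

#!/bin/sh
# brute-force refresh of the Plasma desktop after resume
killall plasmashell
kstart plasmashell
# on Plasma 5.18+ the equivalent would be: kstart5 plasmashell -- --replace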
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237262/" ] }
372,733
I am writing a bash script; I execute a certain command and grep its output:

pfiles $1 2> /dev/null | grep name    # $1 is the process ID

The response will be something like:

sockname: AF_INET6 ::ffff:10.10.50.28 port: 22
peername: AF_INET6 ::ffff:10.16.6.150 port: 12295

The response can be no lines, 1 line, or 2 lines. In case grep returns no lines (grep return code 1), I abort the script; if I get 1 line I invoke A(), or B() if I get more than 1 line. grep's return code is 0 when the output is 1-2 lines. grep has both a return value (0 or 1) and output. How can I catch them both? If I do something like:

OUTPUT=$(pfiles $1 2> /dev/null | grep peername)

then the variable OUTPUT will hold the output (a string); I also want the boolean result of the grep execution.
You can use output=$(grep -c 'name' inputfile) The variable output will then contain the number 0 , 1 , or 2 . Then you can use an if statement to execute different things.
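As an illustration of catching both the text and the exit status, here is a sketch building on the asker's own pfiles pipeline (A and B stand for the asker's functions):

output=$(pfiles "$1" 2> /dev/null | grep name)
status=$?                     # exit status of grep: 0 = match, 1 = no match
if [ "$status" -ne 0 ]; then
    exit 1                    # no lines: abort the script
fi
lines=$(printf '%s\n' "$output" | wc -l)
if [ "$lines" -eq 1 ]; then
    A                         # exactly one matching line
else
    B                         # two (or more) matching lines
fi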
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201338/" ] }
372,736
So I just installed Debian 9.0.0 on my PC and now I can't download packages with the Synaptic package manager, as the sources.list file under /etc/apt/ has only the DVD set. All other lines are commented out, with this text above the 2 lines I'd like to take back in: # Line commented out by installer because it failed to verify: This is probably due to the fact that I wasn't connected to the Internet while installing Debian from the DVD. Also, apparently the DVD isn't detected under Debian: /media/ only contains cdrom and cdrom0 whether or not I insert the DVD, and both are empty. -> Not sure if that is a separate issue? I cannot edit the sources.list file by just opening it with the text editor as it's write-protected. I thought about installing leafpad from here: https://packages.debian.org/stretch/amd64/leafpad/download but I'm not sure if that would help. To me it seems that the most straightforward way would be to open the text editor as root, comment out the DVD sources and take the 2 security.debian.org sources back in. However, I'm not sure how to do that. I tried sudo gedit which gets me this (I translated the part after "Unable to init server:"):

No protocol specified
Unable to init server: Connection failed: connection buildup denied
(gedit:1297): Gtk-WARNING **: cannot open display: :0

I would greatly appreciate any help.
You can use output=$(grep -c 'name' inputfile) The variable output will then contain the number 0 , 1 , or 2 . Then you can use an if statement to execute different things.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372736", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233262/" ] }
372,804
ls "*" This shows nothing in my directory. Why? Isn't * a wildcard that will show everything?
When it's in double quotes, the * doesn't get treated as a glob, and so doesn't get expanded. So you're asking ls to list a file named * , which probably doesn't exist. To see all the files, you could run ls without any arguments, as its default behavior is to show you all the files in the current directory. If you wanted to pass all the files as arguments to ls for some reason, just remove the quotes and run ls * . That's really similar to plain ls , except that if you have a lot of files, * might expand to too many arguments for ls ; also, ls * will show the contents of directories, while ls by itself will just show that the directories are in the current directory without descending into them.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237306/" ] }
372,816
I am trying to get a set number of random lines that satisfy a condition. e.g. if my file was:

a 1 5
b 4 12
c 2 3
e 6 14
f 7 52
g 1 8

then I would like exactly two random lines where the difference between column 3 and column 2 is greater than 3 but less than 10 (e.g. lines starting with a, b, e, and g would qualify). How would I approach this?

awk (if something and random) '{print $1,$2,$3}'
You can do this in awk but getting the random selection of lines will be complex and will require writing quite a bit of code. I would instead use awk to get the lines that match your criteria and then use the standard tool shuf to choose a random selection:

$ awk '$3-$2>3 && $3-$2 < 10' file | shuf -n2
g 1 8
a 1 5

If you run this a few times, you'll see you get a random selection of lines:

$ for i in {1..5}; do awk '$3-$2>3 && $3-$2 < 10' file | shuf -n2; echo "--"; done
g 1 8
e 6 14
--
g 1 8
e 6 14
--
b 4 12
g 1 8
--
b 4 12
e 6 14
--
e 6 14
b 4 12
--

The shuf tool is part of the GNU coreutils, so it should be installed by default on most any Linux system and easily available for most any *nix.
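If this is wanted as a reusable helper, a small (hypothetical) shell function wrapping the same pipeline could parameterise the file and the number of lines; it still relies on GNU shuf:

# usage: sample_matches FILE COUNT
sample_matches() {
    awk '$3-$2 > 3 && $3-$2 < 10' "$1" | shuf -n "$2"
}
sample_matches file 2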
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372816", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182628/" ] }
372,840
When using Linux, one can easily create a regular file of some size and then create a filesystem in it. How can one do the same thing when using FreeBSD? I tried this:

root@:/tmp/test # newfs -U ~/disk
newfs: /root/disk: not a character-special device: No error: 0
newfs: no valid label found

I didn't find any relevant information on this (e.g. "Use switch -i to allow the filesystem to be created on a regular file instead of only on a character device.") on the (fairly short) man page of newfs .
Create the file; "1g" stands for one gigabyte: truncate -s 1g disk.img Attach the file as a virtual memory disk; this will print the allocated device name, eg "md0": mdconfig disk.img Create a filesystem on that memory disk: newfs /dev/md0 And finally mount it: mount /dev/md0 /mnt You can use mdconfig -lv to show currently attached memory disks. Also note that the memory disk - md0 in this case, link - is a GEOM provider, so for all practical intents and purposes behaves as a disk. Which means, if you do an image of a physical disk, and attach that image using mdconfig(8), GEOM will automatically probe partitions, so you'll get /dev/md0p1, /dev/md0p2 etc. You can also use geli(8) to encrypt its contents, or create a zpool on them.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147785/" ] }
372,850
Is it possible to run a command with parameters first of which starts with - (dash) e.g. /usr/bin/echo -n foo as different user and group, for example apache:apache using command su when login shell is set to /sbin/nologin ? I tried: su -s "/usr/bin/echo" -g apache apache -n foo fails with su: invalid option -- 'n' . It looks like first argument may not start with dash. su -c "/usr/bin/echo -n foo" -g apache apache fails with nologin: invalid option -- 'c' . It looks like -c can't be used if login shell is /sbin/nologin
su -s /bin/bash -c "/usr/bin/echo -n foo" -g apache apache -s /bin/bash overrides nologin and allows to interpret value of -c option -c "/usr/bin/echo -n foo" allows to avoid using dash-starting first argument
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/372850", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46334/" ] }
372,857
I am creating a generic compilation/transpilation system. One way to know if a file has been compiled/transpiled already would be to compare the source and target file modification dates. I need to write a bash script that can do that: source_file=foo;target_file=bar;stat_source=$(stat source_file);stat_target=$(stat target_file); but how can I extract the dates from the stat output and compare them? Is there a better way than stat to compare the most recent modification time of a file? If I call stat on a log file, I get this: 16777220 12391188 -rw-r--r-- 1 alexamil staff 0 321 "Jun 22 17:45:53 2017" "Jun 22 17:20:51 2017" "Jun 22 17:20:51 2017" "Jun 22 15:40:19 2017" 4096 8 0 test.log AFAICT, the time granularity is no finer than seconds. I need to get something more granular than that if possible.
Given that you're already using stat (similar functionality, but different output format on BSDs and GNU), you could also use the test utility, which does this comparison directly:

FILE1 -nt FILE2    FILE1 is newer (modification date) than FILE2
FILE1 -ot FILE2    FILE1 is older than FILE2

In your example:

if [ "$source_file" -nt "$target_file" ]
then
    printf '%s\n' "$source_file is newer than $target_file"
fi

The feature is not available in POSIX (see its documentation for test ), which provides as a rationale: Some additional primaries newly invented or from the KornShell appeared in an early proposal as part of the conditional command ([[]]): s1 > s2, s1 < s2, str = pattern, str != pattern, f1 -nt f2, f1 -ot f2, and f1 -ef f2. They were not carried forward into the test utility when the conditional command was removed from the shell because they have not been included in the test utility built into historical implementations of the sh utility. That might change in the future though, as the feature is widely supported. Note that when operands are symlinks, it's the modification time of the target of the symlink that is considered (which is generally what you want; use find -newer instead if not). When symlinks cannot be resolved, the behaviour varies between implementations (some consider an existing file as always newer than one that can't be resolved, some will always report false if any of the operands can't be resolved). Also note that not all implementations support sub-second granularity ( bash 's test / [ builtin as of version 4.4 still doesn't, for instance; GNU test and the test builtin of zsh or ksh93 do, at least on GNU/Linux). For reference: for the GNU test utility implementation (though note that your shell, if fish or Bourne-like, will also have a test / [ builtin that typically shadows it; use env test instead of test to bypass it), get_mtime in test.c reads struct timespec , and option -nt uses that data.
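Applied to the asker's compile/transpile scenario, a minimal sketch could look like this (the src/ and build/ layout and the transpile command are hypothetical placeholders; note that with bash's test, -nt is also true when the target does not exist yet, so the first build is handled too):

for source_file in src/*.src; do
    target_file="build/$(basename "$source_file" .src).out"
    if [ "$source_file" -nt "$target_file" ]; then
        transpile "$source_file" "$target_file"   # placeholder for the real tool
    fi
done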
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372857", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
372,917
I have a text file named xid.txt :

xid: SC48028 id: artf398444
xid: indv1000 id: indv24519
xid: SC32173 id: artf398402
xid: SC21033 id: artf398372
xid: 1001 id: tracker4868
xid: wiki1000 id: wiki10709
xid: proj1234 id: proj12556

I need to add the string 'PT_' before 'SC48028' , 'SC32173' and so on. The string is not always 'SC...' ; it can start with other combinations, maybe 'AC...' or 'DL..' . Required output:

xid: PT_SC48028 id: artf398444
xid: indv1000 id: indv24519
xid: PT_SC32173 id: artf398402
xid: PT_SC21033 id: artf398372
xid: 1001 id: tracker4868
xid: wiki1000 id: wiki10709
xid: proj1234 id: proj12556

As you can see in the above output, we should not insert 'PT_' before strings which start with 'i' , 'p' , 'w' or a numeral. I have tried a few basic commands for my requirement using insert/append in sed.
With awk :

awk '$2~/^[A-Z][A-Z]/{ $2="PT_"$2 }1' xid.txt

The output:

xid: PT_SC48028 id: artf398444
xid: indv1000 id: indv24519
xid: PT_SC32173 id: artf398402
xid: PT_SC21033 id: artf398372
xid: 1001 id: tracker4868
xid: wiki1000 id: wiki10709
xid: proj1234 id: proj12556

$2~/^[A-Z][A-Z]/ - if the 2nd field starts with 2 uppercase letters

Or the sed approach:

sed -i 's/^\(xid:[[:space:]]*\)\([A-Z]\{2\}[^[:space:]]*\)/\1PT_\2/' xid.txt
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372917", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/219358/" ] }
372,921
I am using Ubuntu mate but I want to switch over the Debian mate or Debian lxde (testing). Thus i need to know following points Will my Debian testing will remain testing for all the time? suppose i had install Debian testing buster.after few years it became the the stable.So will my Debian also become stable?Will i still receive testing updates? I want to install Debian mate(or Lxde) testing but i didn't find it on there site.should i download the entire dvd-1 of Debian testing.iso or should i do netinst?Or should I download all 3 dvds of Debian testing?I tried to search live cds but they all were stable(outdated) Is debian testing is not good or unstable?does it crash or does it have so many bugs? Is it compatible for my latop?here is the output of lscpu command smit@Smit-Aspire-5742:~$ lscpuArchitecture: x86_64CPU op-mode(s): 32-bit, 64-bitByte Order: Little EndianCPU(s): 4On-line CPU(s) list: 0-3Thread(s) per core: 2Core(s) per socket: 2Socket(s): 1NUMA node(s): 1Vendor ID: GenuineIntelCPU family: 6Model: 37Model name: Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHzStepping: 5CPU MHz: 933.000CPU max MHz: 2399.0000CPU min MHz: 933.0000BogoMIPS: 4787.75Virtualization: VT-xL1d cache: 32KL1i cache: 32KL2 cache: 256KL3 cache: 3072KNUMA node0 CPU(s): 0-3Flags: fpu vme de pse tsc msr pae mce cx8 apic sepmtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good noplxtopologynonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm tpr_shadow vnmi flexpriority ept vpid dtherm arat
With awk : awk '$2~/^[A-Z][A-Z]/{ $2="PT_"$2 }1' xid.txt The output: xid: PT_SC48028 id: artf398444xid: indv1000 id: indv24519xid: PT_SC32173 id: artf398402xid: PT_SC21033 id: artf398372xid: 1001 id: tracker4868xid: wiki1000 id: wiki10709xid: proj1234 id: proj12556 $2~/^[A-Z][A-Z]/ - if the 2nd field starts with 2 uppercase letters Or sed approach: sed -i 's/^\(xid:[[:space:]]*\)\([A-Z]\{2\}[^[:space:]]*\)/\1PT_\2/' xid.txt
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372921", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
372,952
The ls command can give a result like:

[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo  CentOS-Vault.repo

But I actually want to list only CentOS-Base.repo , CentOS-Debuginfo.repo and CentOS-Vault.repo , not CentOS-Media.repo . So I ran this command:

ls [^\(Media\)]

But I get an error message. How should I do this?
In most simple case you may use the following (in case if the 1st subword is static CentOS ): ls CentOS-[BDV]* [BDV] - character class to ensure the second subword starting with one of the specified characters or the same with negation : ls CentOS-[^M]* If you want to ignore all filenames that contain the M character, with the GNU implementation of ls (as typically found on CentOS), use the -I ( --ignore ) option: ls -I '*M*' -I, --ignore = PATTERN do not list implied entries matching shell PATTERN To ignore entries with Media word: ls -I '*Media*' Those patterns need to be passed verbatim to ls , so must be quoted (otherwise, the shell would treat them as globs to expand).
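If the -I option is not available, a hedged alternative is to let find do the filtering instead of ls (GNU find, as shipped on CentOS, supports -maxdepth):

find /etc/yum.repos.d -maxdepth 1 -name 'CentOS-*.repo' ! -name '*Media*'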
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372952", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237289/" ] }
372,994
I'm trying to get all files by mask in some directory without recursively searching in subdirs. There is no option -maxdepth 0 in AIX for that. I've heard about -prune , but still can't get how it works. I guess the command should look something like find dir \( ! -name dir -prune -type f \) -a -name filemask but it doesn't work. Could you please write a correct command for me and explain how it will work? UPD It seems command find dir ! -path dir -prune prints all files and catalogs in dir , but not files and catalogs in dir/* , so I can use it for my case.
You'd want: find dir/. ! -name . -prune -type f -name filemask Or: find dir ! -path dir -prune -type f -name filemask To find the regular files called filemask in dir without searching in sub-directories of dir . With find dir ! -name dir -prune , you'd have issues if there was a dir/dir directory. The dir/. approach works around that because find will not come across any other file called . than that dir/. file passed as argument. The -path approach works around it by looking at the file path of the files (as opposed to just the name), -path dir will match on dir , but not on dir/dir (so dir will be the only directory it will not prune). -path may not be available in older versions of AIX though. More generally, for the standard equivalent of GNU's -maxdepth n , see Limit POSIX find to specific depth?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372994", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216422/" ] }
373,063
Long story short, I need to perform this all automatically on boot (embedded system). Our engineers will flash images to production devices. These images will contain a small partition table. On boot, I need to automatically expand the last partition (#3) to use all the available space on the disk. Here is what I get when I look at the free space on my disk:

> parted /dev/sda print free
Model: Lexar JumpDrive (scsi)
Disk /dev/sda: 32.0GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
        17.4kB  1049kB  1031kB               Free Space
 1      1049kB  25.3MB  24.2MB  fat16        primary  legacy_boot
        25.3MB  26.2MB  922kB                Free Space
 2      26.2MB  475MB   449MB   ext4         primary
 3      475MB   1549MB  1074MB  ext4         primary
        1549MB  32.0GB  30.5GB               Free Space

I need to expand partition 3 by N (30.5GB) bytes. How do I perform this step automatically, with no prompt? This needs to work with a dynamic size of space available after the 3rd partition.
In current versions of parted , resizepart should work for the partition ( parted understands 100% or things like -1s , the latter also needs -- to stop parsing options on the cmdline). To determine the exact value you can use unit s , print free . resize2fs comes afterwards for the filesystem. Old versions of parted had a resize command that would resize both partition and filesystem in one go, it even worked for vfat . In a Kobo ereader modification I used this to resize the 3rd partition of internal memory to the maximum: (it blindly assumes there to be no 4th partition and msdos table and things)

start=$(cat /sys/block/mmcblk0/mmcblk0p3/start)
end=$(($start+$(cat /sys/block/mmcblk0/mmcblk0p3/size)))
newend=$(($(cat /sys/block/mmcblk0/size)-8))
if [ "$newend" -gt "$end" ]
then
    parted -s /dev/mmcblk0 unit s resize 3 $start $newend
fi

So you can also obtain the values from /sys/block/.../ if the kernel supports it. But parted removed the resize command so you have to do two steps now, resizepart to grow the partition, and whatever tool your filesystem provides to grow that, like resize2fs for ext* .
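For the asker's boot-time case, the two steps might be combined roughly as in the sketch below (it assumes /dev/sda3 is the last partition and carries ext4, as in the question; depending on the parted version you may still be prompted, in which case the growpart tool from cloud-utils is a common alternative):

#!/bin/sh
# grow partition 3 of /dev/sda to the end of the disk, then grow the filesystem
parted -s -- /dev/sda resizepart 3 100%
partprobe /dev/sda          # ask the kernel to re-read the partition table
resize2fs /dev/sda3
# alternative for the first step: growpart /dev/sda 3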
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/373063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87529/" ] }
373,095
A month ago I wrote a Python script to map MAC and IP addresses from stdin. And two days ago I remembered it and used to filter output of tcpdump but it went wrong because of a typo.I typed tcpdump -ne > ./mac_ip.py and the output is nothing. But the output should be "Unknown" if it can't parse the input, so I did cat ./mac_ip.py and found all the tcpdump data instead of the program. Then I realized that I should use tcpdump -ne | ./mac_ip.py Is there any way to get my program back? Anyways I can write my program again, but if it happens again with more important program I should be able to do something. OR is there any way to tell output redirection to check for the file and warn if it is an executable?
Sadly I suspect you'll need to rewrite it. (If you have backups, this is the time to get them out. If not, I would strongly recommend you set up a backup regime for the future. Lots of options available, but off topic for this answer.) I find that putting executables in a separate directory, and adding that directory to the PATH is helpful. This way I don't need to reference the executables by explicit path. My preferred programs directory for personal (private) scripts is "$HOME"/bin and it can be added to the program search path with PATH="$HOME/bin:$PATH" . Typically this would be added to the shell startup scripts .bash_profile and/or .bashrc . Finally, there's nothing stopping you removing write permission for yourself on all executable programs:

touch some_executable.py
chmod a+x,a-w some_executable.py    # chmod 555, if you prefer
ls -l some_executable.py
-r-xr-xr-x+ 1 roaima roaima 0 Jun 25 18:33 some_executable.py
echo "The hunting of the Snark" > ./some_executable.py
-bash: ./some_executable.py: Permission denied
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/373095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237535/" ] }
373,143
Today I learned of a Windows-only product today called the Overwolf Replay HUD , which lets the user press a key to replay the last 20 seconds of happenings on their screen. It's meant for people playing or spectating fast-paced videogames who want to quickly review a hectic moment. I'm trying to duplicate that behaviour on Linux. So far, I figure I could easily start ffmpeg (with -f x11grab ) capture to a file in /tmp (which is memory-mapped), then use sxhkd to bind a keyboard shortcut to launch mpv to play the last 20 seconds of that file. However, the rest of the recording would still be stored, and I'd eventually run out of RAM. How could I go about keeping only the last 20 seconds?
The segment muxer will work.

Step 1:

ffmpeg -i input -force_key_frames "expr:gte(t,n_forced*4)" -c:v libx264 -c:a aac -f segment -segment_time 4 -segment_wrap 6 -segment_list list.m3u8 -segment_list_size 6 seg%d.ts

This will save the recording in segments of 4 seconds. Once 6 segments have been written, the next segment will overwrite the first file. The playlist is updated accordingly.

Step 2:

ffmpeg -i list.m3u8 -c copy video.mp4

or

ffplay list.m3u8

The duration of the preserved footage is 20 < duration < 24 seconds.
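Wiring this to the screen capture from the question might look roughly like the sketch below; the display name, resolution, frame rate and the /tmp/replay directory are assumptions to adjust:

mkdir -p /tmp/replay
# continuous capture of the X display into a rolling window of 6 x 4-second segments
ffmpeg -f x11grab -framerate 30 -video_size 1920x1080 -i :0.0 \
       -c:v libx264 -preset ultrafast \
       -force_key_frames "expr:gte(t,n_forced*4)" \
       -f segment -segment_time 4 -segment_wrap 6 \
       -segment_list /tmp/replay/list.m3u8 -segment_list_size 6 \
       /tmp/replay/seg%d.ts

# bound to a hotkey (e.g. via sxhkd): replay the last ~20-24 seconds
mpv /tmp/replay/list.m3u8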
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373143", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16404/" ] }
373,147
Lets say I have a file called sample.txt which contains:

ab
bc
ac

grep -E "^b|c$" sample.txt gives me output as:

bc
ac

Now I want the filter string to be added to the output. I want the output as:

bc,b
ac,c

How can I achieve this ?
The segment muxer will work. Step 1 : ffmpeg -i input force_key_frames expr:gte(t,n_forced*4) -c:v libx264 -c:a aac -f segment -segment_time 4 -segment_wrap 6 -segment_list list.m3u8 -segment_list_size 6 seg%d.ts This will save the recording in segments of 4 seconds. Once 6 segments have been written, the next segment will overwrite the first file. The playlist is updated accordingly. Step 2 : ffmpeg -i list.m3u8 -c copy video.mp4 or ffplay list.m3u8 The duration of the preserved footage is 20 < duration < 24 .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373147", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237570/" ] }
373,156
I have a question about for loop in shell. Let's assume this simple shell script: #!/bin/shloop() { for i in 1 2 3 4; do if [ $i -eq 2 ]; then [ $1 -eq 2 ] && return 1 loop $(($1 + 1)) && return 1 fi done return 1 } loop 0 All variables are global, except for arguments (and function arguments). So if I want a local variable in functionI would have to pass it as argument. I tried to run this simple script, but I'm not sure if also the for loop list (1 2 3 4 in this example) is also local? See below: + loop 0+ for i in 1 2 3 4+ '[' 1 -eq 2 ']'+ for i in 1 2 3 4+ '[' 2 -eq 2 ']'+ '[' 0 -eq 2 ']'+ loop 1+ for i in 1 2 3 4+ '[' 1 -eq 2 ']'+ for i in 1 2 3 4+ '[' 2 -eq 2 ']'+ '[' 1 -eq 2 ']'+ loop 2+ for i in 1 2 3 4+ '[' 1 -eq 2 ']'+ for i in 1 2 3 4+ '[' 2 -eq 2 ']'+ '[' 2 -eq 2 ']'+ return 1+ for i in 1 2 3 4+ '[' 3 -eq 2 ']'+ for i in 1 2 3 4+ '[' 4 -eq 2 ']' <- here is $i == 4+ return 1+ for i in 1 2 3 4+ '[' 3 -eq 2 ']' <- here is $i == 3, correctly behaving as local variable ...+ for i in 1 2 3 4+ '[' 4 -eq 2 ']'+ return 1 Can anyone please tell me, how the for loop works internally? I am bit confused about the for loop list, that is behaving like "local variable". Thank you very much for all your answers! :)
The segment muxer will work. Step 1 : ffmpeg -i input force_key_frames expr:gte(t,n_forced*4) -c:v libx264 -c:a aac -f segment -segment_time 4 -segment_wrap 6 -segment_list list.m3u8 -segment_list_size 6 seg%d.ts This will save the recording in segments of 4 seconds. Once 6 segments have been written, the next segment will overwrite the first file. The playlist is updated accordingly. Step 2 : ffmpeg -i list.m3u8 -c copy video.mp4 or ffplay list.m3u8 The duration of the preserved footage is 20 < duration < 24 .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373156", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221565/" ] }
373,186
I have a script which runs in the background (without terminal windows or TTY) and occasionally does something. I now want it to do another thing when it does something and it is to open a Gnome Terminal window and execute 2 commands. Actually, I only need to execute 1 command but I want to make the terminal window stay open so I can see the output of the command. The command prints both on stdout and stderr and its output changes over time, so just writing it to a file and sending some kind of notification wouldn't do the job very well. I can get Gnome Terminal to open a window and execute 1 command: gnome-terminal -e "sleep 10" I chose sleep as the long-running command for simplicity. However, when adding another command, no terminal window opens: gnome-terminal -e "echo test; sleep 10" What's the solution to this?
gnome-terminal treats everything in quotes as one command, so in order to run many of them consecutively you need to start interpreter (usually a shell), and do stuff inside it, for instance: gnome-terminal -e 'sh -c "echo test; sleep 10"' BTW, you may want the window to stay open even after commands finish their job, in such case just start new shell, or replace a current with the new one: gnome-terminal -e 'sh -c "echo test; sleep 10; exec bash"'
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/373186", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147785/" ] }
373,187
I'm using rxvt-unicode , version 9.22 , as a terminal emulator, and configure it with the file ~/.Xresources . When I modify the configuration file, to see the effects immediately I execute the command: xrdb ~/.Xresources From man xrdb : Lines that begin with an exclamation mark (!) are ignored and may be used as comments. On my machine, with xrdb version 1.1.0 , when a commented line contains an odd number of single quotes, for example ! it's a comment , xrdb complains with an error such as: /home/user/.Xresources:1:5: warning: missing terminating ' character ! it's a comment ^ Currently, I double the single quotes to avoid this error: ! it''s a comment I think I could also use /* */ , instead of ! , because it's the comment string used by default by Vim (defined in $VIMRUNTIME/ftplugin/xdefaults.vim ). But I would prefer using ! , because I find comments a little more readable with it. Is there a way to ask xrdb to ignore single quotes inside the commented lines of ~/.Xresources ?
This seems to be due to a change in the default behavior of GNU cpp , which xrdb uses as its default preprocessor. Specifically, according to The C Preprocessor: 10.1 Traditional lexical analysis : Generally speaking, in traditional mode an opening quote need not have a matching closing quote. However cpp provides a command line option to operate in traditional mode: -traditional-traditional-cpp Try to imitate the behavior of pre-standard C preprocessors, as opposed to ISO C preprocessors. See Traditional Mode. while xrdb allows the preprocessor to be defined explicitly on its command line: -cpp filename This option specifies the pathname of the C preprocessor pro‐ gram to be used. Although xrdb was designed to use CPP, any program that acts as a filter and accepts the -D, -I, and -U options may be used. Hence it should be possible to suppress the warning by using xrdb -cpp "/usr/bin/cpp -traditional-cpp" ~/.Xresources or xrdb -cpp "/usr/bin/cpp -traditional" ~/.Xresources
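To avoid typing the longer command every time, one possibility (not from the original answer) is a small alias in the shell startup file; the alias name is arbitrary:

# in ~/.bashrc or ~/.zshrc
alias xrdb-reload='xrdb -cpp "/usr/bin/cpp -traditional-cpp" ~/.Xresources'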
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373187", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/232487/" ] }
373,223
Suppose the default shell for my account is zsh but I opened the terminal and fired up bash and executed a script named prac002.sh , which shell interpreter would be used to execute the script, zsh or bash? Consider the following example:

papagolf@Sierra ~/My Files/My Programs/Learning/Shell % sudo cat /etc/passwd | grep papagolf
[sudo] password for papagolf:
papagolf:x:1000:1001:Rex,,,:/home/papagolf:/usr/bin/zsh
# papagolf's default shell is zsh
papagolf@Sierra ~/My Files/My Programs/Learning/Shell % bash
# I fired up bash. (See that '%' prompt in zsh changes to '$' prompt, indicating bash.)
papagolf@Sierra:~/My Files/My Programs/Learning/Shell$ ./prac002.sh
Enter username : Rex
Rex
# Which interpreter did it just use?

EDIT: Here's the content of the script:

papagolf@Sierra ~/My Files/My Programs/Learning/Shell % cat ./prac002.sh
read -p "Enter username : " uname
echo $uname
Because the script does not begin with a #! shebang line indicating which interpreter to use, POSIX says that : If the execl() function fails due to an error equivalent to the [ENOEXEC] error defined in the System Interfaces volume of POSIX.1-2008, the shell shall execute a command equivalent to having a shell invoked with the pathname resulting from the search as its first operand , with any remaining arguments passed to the new shell, except that the value of "$0" in the new shell may be set to the command name. If the executable file is not a text file, the shell may bypass this command execution. In this case, it shall write an error message, and shall return an exit status of 126. That phrasing is a little ambiguous, and different shells have different interpretations. In this case, Bash will run the script using itself . On the other hand, if you ran it from zsh instead, zsh would use sh (whatever that is on your system) instead. You can verify that behaviour for this case by adding these lines to the script: echo $BASH_VERSIONecho $ZSH_VERSION You'll note that, from Bash, the first line outputs your version, while the second never says anything, no matter which shell you use. If your /bin/sh is, say, dash , then neither line will output anything when the script is executed from zsh or dash. If your /bin/sh is a link to Bash, you'll see the first line output in all cases. If /bin/sh is a different version of Bash than you were using directly, you'll see different output when you run the script from bash directly and from zsh. The ps -p $$ command from rools's answer will also show useful information about the command the shell used to execute the script.
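The ambiguity disappears entirely if the script declares its interpreter. A sketch of the asker's prac002.sh with an explicit bash shebang (read -p is a bashism, so bash is the natural choice here):

#!/bin/bash
read -p "Enter username : " uname
echo "$uname"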
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/373223", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152971/" ] }
373,232
I am working on a Debian server where I have only console access without sudo. The main folder called applications includes subfolders with all the projects I have. However, when I create a new project through my admin panel, the folder name is a nonsense string and you can rename it by creating a symlink to a new folder. So, for example, the applications folder is like that:

applications/
    abuwryjbrb
    evharjqgxj
    MyCustomProjectName1
    MyCustomProjectName2

I want to check how much space each of the applications uses. Since I don't have a lot of experience in Unix, I googled and found that I can do it with du -sh * . However, the output is like that:

91M     abuwryjbrb
201M    evharjqgxj
0       MyCustomProjectName1
0       MyCustomProjectName2

As a result, it is too time consuming for me to check the names one by one and see which folder is which. Is there any way to get an output with the disk usage for the symlinks instead? Using du -sh -L * instead, I don't get duplicated folders for both original and symlinked, but I get a mixed output like this:

91M     abuwryjbrb
201M    MyCustomProjectName1

where some of the folders have the original name and some the symlink name.
You can get usage of symlinks using -L flag along with du command. du -sh -L * should help you.
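To get only the friendly project names in the listing, a small sketch that loops over the entries and keeps just the symlinks (the path is a placeholder):

cd /path/to/applications || exit 1
for entry in *; do
    [ -L "$entry" ] && du -sh -L "$entry"    # report only the symlinked names
done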
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373232", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63005/" ] }
373,239
The freedesktop organization defines the standard for .desktop files. Unfortunately it defines not the permissions of the file (see freedesktop mailinglist ) and software is distributed with a) executable .desktop filesb) non executable .desktop filesc) mixed a) and b) in one software package. This is not very satisfying for Linux distributors , who aim to provide a consistent system. I want to use the broad audience of sx, to find out what advantage has a .desktop file without execution bit? Is there any reason for not having all .desktop files executable if the filesystem alows it? Are there known security problems? Are there programs which have difficulties with executable .desktop files?
One obvious reason a .desktop has not necessarily the executable bit set is these files were not intended to be executable in the first place. A .desktop file contains metadata the tell the desktop environment how to associate programs to file types but was never designed to be executed itself. However, as a .desktop file indirectly tell the graphic environment what to execute, it has an indirect capacity to launch whatever program is defined in it, opening the door to exploits. To avoid malicious .desktop files to be responsible to the launch of hostile or unwanted programs, KDE and gnome developers introduced a custom hack that somewhat deviates the intended Unix file execution permission purpose to add a security layer. With this new layer, only .desktop files with the executable bit set are taken into account by the desktop environment. Just turning a non executable file like a .desktop one to an executable one would be a questionable practice because it introduces a risk. Non binary executable files with no shebang are executed by a shell (be it bash or sh or whatever). Asking the shell to execute a file which is not a shell script has unpredictable results. To avoid that issue, a shebang needs to be present in the .desktop files and should point to the right command designed to handle them, xdg-open , like for example Thunderbird does here: #!/usr/bin/env xdg-open[Desktop Entry]Version=1.0Name=ThunderbirdGenericName=EmailComment=Send and Receive Email... In this case, executing the .desktop file will do whatever xdg-open (and your Desktop Environment) believe is the right thing to do, possibly just opening the file with a browser or a text editor which might not be what you expect.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373239", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26440/" ] }
373,260
I have a frequently used directory. Suppose it is: /etc/insserv.conf.d/testname I set a variable in my /root/.bashrc : mydir=/etc/insserv.conf.d/testname Now, I can open this directory by this command cd $mydir But I really don't like that character $ . Is there any workaround can implement this? I mean: I want to open this directory just by cd mydir , is it possible in Ubuntu 16.04 ?
You are looking for cdable_vars option. To activate it run shopt -s cdable_vars if you are using bash ( setopt cdablevars in case of zsh). After that simple cd mydir would work. Note that if you try to cd mydir from a directory which contains a file or directory by the same name, then the shell will attempt to use the file or directory object in the current directory, instead of expanding the variable.
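To make this permanent for bash, both pieces can go into ~/.bashrc (a sketch; zsh users would use setopt cdablevars in ~/.zshrc instead):

# in ~/.bashrc
shopt -s cdable_vars
mydir=/etc/insserv.conf.d/testname
# afterwards, "cd mydir" works from any location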
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373260", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237289/" ] }
373,309
I would like a command that will resolve a hostname to an IP address, in the same way that a normal program would resolve the hostname. In other words, it has to take into account mDNS ( .local ) and /etc/hosts , as well as regular DNS. So that rules out host , dig , and nslookup , since all three of those tools only use regular DNS and won't resolve .local addresses. On Linux, the getent command does exactly what I want . However, getent does not exist on OS X. Is there a Mac OS X equivalent of getent ? I'm aware that I could write one in a few lines using getaddrinfo , and that's what I'll do if I have to, but I was just wondering if there was already a standard command that could do it. Thanks!
I think dscacheutil is what you're looking for. It supports caching, /etc/hosts, mDNS (for .local). dscacheutil -q host -a name foo.local Another option is dns-sd dns-sd -q foo.local More information about dnscacheutil .
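If something closer to getent hosts is wanted, a rough stand-in is to extract just the address field from dscacheutil's output; the ip_address field name here is an assumption based on its usual IPv4 output format:

# approximate "getent hosts NAME" on macOS
gethost() {
    dscacheutil -q host -a name "$1" | awk '/^ip_address:/ { print $2 }'
}
gethost foo.local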
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/373309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237015/" ] }
373,312
For years, the OOM killer of my operating system doesn't work properly and leads to a frozen system. When the memory usage is very high, the whole system tends to "freeze" (in fact: becoming extremely slow) for hours or even days , instead of killing processes to free the memory. The maximum that I have recorded is 7 days before resigning myself to operate a reset. When OOM is about to be reached, the iowait is very very high (~ 70%), before becoming unmeasurable. The tool: iotop has showed that every programs are reading at a very high throughput (per tens of MB/sec) from my hard drive. What those programs are reading ? - The directory hierarchy ? - The executable code itself ? I don't exactly now. [edited] At the time I wrote this message (in 2017) I was using an uptodate ArchLinux (4.9.27-1-lts), but had already experienced the issue for years before. I have experienced the same issue with various Linux distributions and different hardware configurations. Currently (2019), I am using an uptodate Debian 9.6 (4.9.0)I have 16 GB of physical ram, a SSD on which my OS is installed, and not any swap partition. Because of the amount of ram that I have, I don't want to enable a swap partition, since it would just delay the apparition of the issue. Also, with SSDs swapping too often could potentially reduce the lifespan of the disk. By the way, I've already tried with and without a swap partition, it has proved to only delay the apparition of the problem, but not being the solution. To me the problem is caused by the fact that Linux drops essential data from the caches , which leads to a frozen system because it has to read everything, every time from the hard drive. I even wonder if Linux wouldn't drop the executable code pages of running programs, which would explain why programs that normally don't read a lot of data, behave this way in this situation. I have tried several things in the hope to fix this issue. One was to set /proc/sys/vm/min_free_kbytes to 1000000 (1 GB). Because this 1 GB should remain free, I thought that this memory would be reserved by Linux to cache important data. But it hasn't worked. Also, I think useful to add that even if it could sound great in theory, restricting the size of the virtual memory to the size of the physical memory, by defining /proc/sys/vm/overcommit_memory to 2 isn't decently technically possible in my situation, because the kind of applications that I use, require more virtual memory than they effectively use for some reasons. According to the file /proc/meminfo , the Commited_AS value is often higher than the double of the physical ram on my system (16 GB, Commited_AS is often > 32 GB). I have experienced this problem with /proc/sys/vm/overcommit_memory to its default value: 0 , and for a while I have defined it to: 1 , because I preferred programs to be killed by the OOM killer rather than behaving wrongly because they don't check the return values of malloc when the allocations are refused. When I was talking about this issue on IRC , I have met other Linux users who have experienced this very same problem, so I guess that a lot of users are concerned by this. To me this is not acceptable since even Windows deals better with high memory usage. If you need more information, have a suggestion, please tell me. 
Documentation:
https://en.wikipedia.org/wiki/Thrashing_%28computer_science%29
https://en.wikipedia.org/wiki/Memory_overcommitment
https://www.kernel.org/doc/Documentation/sysctl/vm.txt
https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
https://lwn.net/Articles/317814/

They talk about it:
Why does linux out-of-memory (OOM) killer not run automatically, but works upon sysrq-key?
Why does OOM-killer sometimes fail to kill resource hogs?
Preloading the OOM Killer
Is it possible to trigger OOM-killer on forced swapping?
How to avoid high latency near OOM situation?
https://lwn.net/Articles/104179/
https://bbs.archlinux.org/viewtopic.php?id=233843
I've found two explanations(of the same thing) as to why kswapd0 does constant disk reading happens well before OOM-killer kills the offending process: see the answer and comment of this askubuntu SE answer see the answer and David Schwartz's comments of this answer on unix SE I'll quote here the comment from 1. which really opened my eyes as to why I was getting constant disk reading while everything was frozen : For example, consider a case where you have zero swap and system is nearly running out of RAM. The kernel will take memory from e.g. Firefox (it can do this because Firefox is running executable code that has been loaded from disk - the code can be loaded from disk again if needed). If Firefox then needs to access that RAM again N seconds later, the CPU generates "hard fault" which forces Linux to free some RAM (e.g. take some RAM from another process), load the missing data from disk and then allow Firefox to continue as usual. This is pretty similar to normal swapping and kswapd0 does it. – Mikko Rantalainen Feb 15 at 13:08 If anyone has a way as to how to disable this behavior(maybe recompile kernel with what options? ), please let me know as soon as possible! Much appreciated, thanks! UPDATE: The only way I've found thus far is through patching the kernel, and it works for me with swap disabled(ie. CONFIG_SWAP is not set ) but doesn't work for others with swap enabled it seems ; see the patch inside this question.
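One stopgap discussed in the threads linked from the question is to invoke the kernel's OOM killer by hand, rather than waiting for it, once the machine starts thrashing, via the magic SysRq interface. This is only a sketch and assumes you can still reach a root shell or the SysRq key when the freeze begins:

# as root: make sure SysRq is enabled, then ask the kernel to run the OOM killer once
echo 1 > /proc/sys/kernel/sysrq
echo f > /proc/sysrq-trigger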
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176504/" ] }
373,377
My default shell environment is bash and I would like to start a xterm with zsh and execute some commands. In general, to execute some commands, I am using the following xterm -e "ls -lrt;pwd;whoami" This is executing the commands in bash shell with xterm . To start the xterm with different shell, I am using the following. xterm -ls /bin/zsh So, how can I combine both of these ? When I tried. I got the below error. [dinesh@mypc]$ xterm -ls /bin/zsh -e "ls"xterm: bad command line option "/bin/zsh" How to solve this ?
No, the -ls option to xterm doesn't take an argument, it just specifies that the shell that xterm start should be a login shell. Here's the complete section on the -ls flag with the part which is relevant to your issue highlighted: -ls This option indicates that the shell that is started in the xterm window will be a login shell (i.e., the first character of argv[0] will be a dash, indicating to the shell that it should read the user's .login or .profile). The -ls flag and the loginShell resource are ignored if -e is also given, because xterm does not know how to make the shell start the given command after whatever it does when it is a login shell - the user's shell of choice need not be a Bourne shell after all. Also, xterm -e is supposed to provide a consistent functionality for other applications that need to start text-mode programs in a window, and if loginShell were not ignored, the result of ~/.profile might interfere with that. If you do want the effect of -ls and -e simultaneously, you may get away with something like xterm -e /bin/bash -l -c "my command here" Finally, -ls is not completely ignored, because xterm -ls -e does write a /var/run/wtmp entry (if configured to do so), whereas xterm -e does not.
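Adapted to the zsh case from the question, the same pattern might look like this (a sketch, not from the original answer; the final exec zsh keeps the window open with an interactive shell once the listed commands finish):

xterm -e /bin/zsh -l -c "ls -lrt; pwd; whoami; exec zsh"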
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/373377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82015/" ] }
373,402
I have a laptop which I intend to be dual-boot. It previously booted into Windows (7), and after Linux install now boots directly into Linux (openSUSE). I've edited /etc/grub.d/40_custom to add the Windows chainloader entry. So far, so good. Unfortunately I can't get GRUB2 to display the selection menu at all , even to select the Safe Mode entry for the Linux install. I get a split-second flash of the "welcome to grub" message and then it boots directly into the default Linux entry. Things I have tried: Setting GRUB_TIMEOUT to an integer value and GRUB_HIDDEN_TIMEOUT to 0 blank commented out Setting GRUB_HIDDEN_TIMEOUT_QUIET to both true and false Setting GRUB_TERMINAL to console Removing quiet and splash=silent from GRUB_CMDLINE_LINUX_DEFAULT I am regenerating the config each time with /usr/sbin/grub2-mkconfig Other info: Holding shift during boot doesn't bring up the menu regardless of the state of GRUB_HIDDEN_TIMEOUT I'm pretty sure this machine isn't using UEFI (I have no /sys/firmware/efi directory) Legacy USB support is enabled in the BIOS. Anything else I can try? This is getting really aggravating, I never had this much trouble with grub legacy! Edit: Section of grub.cfg related to timeout: if [ x${boot_once} = xtrue]; then set timeout=0 elif [ x$feature_timeout_style = xy ]; then set timeout_style=menu set timeout=0 # Fallback normal timeout code in case the timeout_style feature is unavailableelse set timeout=0 fi This is different to the output displayed by the grub update script, which has timeout = 10 ! Editing the grub.cfg file directly displays the menu as expected.
Do this lines exist in the your /etc/default/grub ? If not, add them. GRUB_TIMEOUT=10GRUB_TIMEOUT_STYLE=menu run update-grub afterwards to update /boot/grub/grub.cfg You can check, if the needed changes has happened, by this way: grep -i timeout /boot/grub/grub.cfg Output should be contained such values: set timeout_style=menuset timeout=10 From the grub manual : GRUB_TIMEOUT Boot the default entry this many seconds after the menu is displayed, unless a key is pressed. The default is 5 . Set to 0 to boot immediately without displaying the menu, or to -1 to wait indefinitely. If GRUB_TIMEOUT_STYLE is set to countdown or hidden , the timeout is instead counted before the menu is displayed. GRUB_TIMEOUT_STYLE If this option is unset or set to menu , then GRUB will display the menu and then wait for the timeout set by GRUB_TIMEOUT to expire before booting the default entry. Pressing a key interrupts the timeout. If this option is set to countdown or hidden , then, before displaying the menu, GRUB will wait for the timeout set by GRUB_TIMEOUT to expire. If ESC is pressed during that time, it will display the menu and wait for input. If a hotkey associated with a menu entry is pressed, it will boot the associated menu entry immediately. If the timeout expires before either of these happens, it will boot the default entry. In the countdown case, it will show a one-line indication of the remaining time.
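Note that the asker is on openSUSE, where the Debian-style update-grub wrapper may not exist and the generated file lives under /boot/grub2; the equivalent steps there would be:

grub2-mkconfig -o /boot/grub2/grub.cfg
grep -i timeout /boot/grub2/grub.cfg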
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22944/" ] }
373,407
I am getting this output from ls -a for a specific dir` d?????????? ? ? ? ? ? dmsnl857-vm This was a mount dir according to my /etc/fstab //192.168.33.55/DMS /home/pkaramol/Workspace/servers/dmsnl857-vm/ cifs credentials=/home/pkaramol/.smb857cred,sec=ntlm 0 0 I am unable to perform any action on it any more, including deleting it via the inode as suggested here . Even as root . Any suggestions?
Do this lines exist in the your /etc/default/grub ? If not, add them. GRUB_TIMEOUT=10GRUB_TIMEOUT_STYLE=menu run update-grub afterwards to update /boot/grub/grub.cfg You can check, if the needed changes has happened, by this way: grep -i timeout /boot/grub/grub.cfg Output should be contained such values: set timeout_style=menuset timeout=10 From the grub manual : GRUB_TIMEOUT Boot the default entry this many seconds after the menu is displayed, unless a key is pressed. The default is 5 . Set to 0 to boot immediately without displaying the menu, or to -1 to wait indefinitely. If GRUB_TIMEOUT_STYLE is set to countdown or hidden , the timeout is instead counted before the menu is displayed. GRUB_TIMEOUT_STYLE If this option is unset or set to menu , then GRUB will display the menu and then wait for the timeout set by GRUB_TIMEOUT to expire before booting the default entry. Pressing a key interrupts the timeout. If this option is set to countdown or hidden , then, before displaying the menu, GRUB will wait for the timeout set by GRUB_TIMEOUT to expire. If ESC is pressed during that time, it will display the menu and wait for input. If a hotkey associated with a menu entry is pressed, it will boot the associated menu entry immediately. If the timeout expires before either of these happens, it will boot the default entry. In the countdown case, it will show a one-line indication of the remaining time.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136100/" ] }
373,420
I’ve been reading about sed and found that it was evolved from grep command. According to https://en.wikipedia.org/wiki/Sed , First appearing in Version 7 Unix, sed is one of the early Unix commands built for command line processing of data files. It evolved as the natural successor to the popular grep command. The original motivation was an analogue of grep (g/re/p) for substitution, hence "g/re/s". Sed search and replace sed 's/old/new/' Example.txt Thus, I was wondering if grep can perform the search and replace function just like sed . If this is possible, please let me know how to accomplish the same via grep , and not via sed .
grep is only meant to (and was only initially) print(ing) the lines matching a pattern. That's what grep means (based on the g/re/p ed command). Now, some grep implementations have added a few features that encroaches a bit on the role of other commands. For instance, some have some -r / --include / --exclude to perform part of find 's job. GNU grep added a -o option that makes it perform parts of sed 's job as it makes it edit the lines being matched. pcregrep extended it with -o1 , -o2 ... to print what was matched by capture groups. So with that implementation, even though it was not designed for that, you can actually replace: sed 's/old/new/' with: pcregrep --om-separator=new -o1 -o2 '(.*?)old(.*)' That doesn't work properly however if the capture groups match the empty string. On an input like: XoldYXoldoldY it gives: XnewYXY You could work around that using even nastier tricks like: PCREGREP_COLOR= pcregrep --color=always '.*old.*' | pcregrep --om-separator=new -o1 -o2 '^..(.+?)old(.+)..' | pcregrep -o1 '.(.*).' That is, prepend and append \e[m (coloring escape sequence) to all matching lines to be sure there is at least one character on either side of old , and strip them afterwards.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373420", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236493/" ] }
373,447
In the Cent OS 7, I use the netstat -an to check the network service: [root@localhost etc]# netstat -an | grep ESTABLISHEDudp 0 0 192.168.1.25:41136 61.216.153.106:123 ESTABLISHEDudp 0 0 192.168.1.25:59141 202.112.29.82:123 ESTABLISHEDudp 0 0 192.168.1.25:53680 115.28.122.198:123 ESTABLISHEDudp 0 0 192.168.1.25:34255 42.51.22.35:123 ESTABLISHED You can see up there the ephemeral 41136 port. If a service uses port 3306 we can know it is MySQL, if port is 8080 we can know it is Tomcat, but how about the ephemeral ports? how can we know which service is associated with these ports?
As for ephemeral ports: The Internet Assigned Numbers Authority (IANA) suggests the range 49152 to 65535 (2^15+2^14 to 2^16−1) for dynamic or private ports. Many Linux kernels use the port range 32768 to 61000. Looking at the destination in the TCP/IP tuple from your example: udp 0 0 192.168.1.25:41136 61.216.153.106:123 You can see it is the current machine using an NTP service UDP/123 on a remote server. In other words, it is your machine doing an NTP request to an NTP server in China. Actually, all those 4 lines are connections to NTP servers in China. Usually, with the majority of protocols, when the well-known service port is on your side (first), you usually are the server receiving the connection, and the ephemeral port is on the right side; when it is the contrary, it is often your server that is using a remote service. (Is your server in China? If not, I would worry about possible malware.) You can also leave out -n so that IP addresses and service names get resolved via DNS; however, be aware that this introduces a noticeable lag on a machine/server with many connections (and/or with a slow DNS service). To get a feel for the difference, I adapted your original netstat output to what it might look like without -n:

$ netstat -a | grep ESTABLISHED
udp 0 0 mylinux:41136 vns1.hinet.net:ntp ESTABLISHED
udp 0 0 mylinux:59141 DNS1.SYNET.EDU.CN:ntp ESTABLISHED
udp 0 0 mylinux:53680 rdns1.alidns.com:ntp ESTABLISHED
udp 0 0 mylinux:34255 ns1.htudns.com:ntp ESTABLISHED
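If you want to know which local program owns one of those ephemeral ports, adding -p to netstat (run as root) shows the owning PID and program name; lsof and ss can do the same. The port number 41136 below is just the one from your example:

sudo netstat -anp | grep 41136
sudo lsof -i UDP:41136
sudo ss -unap | grep 41136    # ss is the modern replacement for netstat

On output like yours these would most likely point at an NTP daemon such as ntpd or chronyd, since the remote port 123 is NTP.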
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/373447", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/228116/" ] }
373,460
I just ran into a weird scenario, and I'm not sure if this is a feature, and if not, what sort of security implications does it represent? Likely nothing for grep, but other directory-crawling utilities, potentially? Here's how to reproduce: touch ./-vR grep hi * Notice that everything not hi is returned, recursively.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/373460", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79419/" ] }
373,461
Is there a way to open less and have it scroll to the end of the file? I'm always doing less app.log and then pressing G to go to the bottom. I'm hoping there's something like less --end or less -exec 'G' .
less +G app.log + will run an initial command when the file is opened G jumps to the end When multiple files are in play, ++ applies commands to every file being viewed. Not just the first one. For example, less ++G app1.log app2.log .
{ "score": 10, "source": [ "https://unix.stackexchange.com/questions/373461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123650/" ] }
373,492
At school the wifi network is WPA2 with PEAP and MSCHAPv2, requiring a certificate to authenticate and connect, along with a user name and password. I have obtained a copy of the certificate from the school's IT technicians. In Ubuntu, to add the certificate I copied it into /usr/share/ca-certificates/extra and then ran sudo dpkg-reconfigure ca-certificates which guided me through the screens below and gave the output. However, I would now like to move from Ubuntu to Arch Linux but have been unable to add the certificate and connect to the network. Could anyone please tell me what the equivalent command is in Arch?
Use the trust command provided by the p11-kit package: sudo trust anchor --store ~/my-ca-cert.crt
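A minimal sequence, assuming the certificate file is ~/my-ca-cert.crt as in the command above:

sudo trust anchor --store ~/my-ca-cert.crt
trust list | grep -i -A 2 'school'    # replace "school" with part of the certificate's subject

trust list prints every anchor known to p11-kit, so grepping for part of your school's certificate name is a quick way to confirm it was actually stored.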
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/373492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/228860/" ] }
373,631
I want to use a high-speed swap device or zram as "page cache" , so that page data can be dropped automatically when the system is out of memory. Apparently, zcache has been discontinued. Is there an available alternative? If yes, which one? bcache doesn't seem to drop data when there's no memory available.
Indeed zcache appears to have been discontinued, as it was removed from kernel 3.11 for being effectively obsolete. The commit message of zcache removal reads staging: zcache: delete it zcache is obsolete and not used anymore, Bob Liu has rewritten it and is submitting it for inclusion through the main -mm tree, as it should have been done in the first place... It appears that Bob Liu's submission never got into mainline. Now, the way I understand it, the page cache is automatically "dropped" (cleared) in an out-of-memory scenario. zcache actually implemented compression so it could maintain more filesystem pages (also known as "vfs cache" or "inode/dentry cache") before being dropped. The Linux kernel has zswap today that implements compressed disk-based swapping, but doesn't compress filesystem pages. I am not aware of a current day alternative for zcache. Perhaps as a workaround, if you are concerned with performance degradation due to filesystem pages being freed, consider tuning vm.vfs_cache_pressure as instructed here . For normal workloads it's safe to just settle with zswap . Additional reading: zram vs zswap vs zcache Ultimate guide: when to use which one Zswap, Zram, Zcache desktop usage scenarios zswap (Arch Linux Wiki) Cleancache and Frontswap (LWN) The Case for Compressed Caching in Virtual Memory Systems
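As a sketch of the two workarounds mentioned above (the lz4 compressor and the value 50 are only illustrative choices, assuming a reasonably recent kernel):

# enable compressed swap caching: add these to the kernel command line, e.g. via /etc/default/grub
zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20

# make the kernel less eager to reclaim the dentry/inode cache
sudo sysctl vm.vfs_cache_pressure=50
echo 'vm.vfs_cache_pressure=50' | sudo tee /etc/sysctl.d/99-vfs-cache.conf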
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/373631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/230141/" ] }
373,704
Problem : Find how many shells deep I am. Details :I open the shell from vim a lot. Build and run and exit. Sometimes I forget and open another vim inside and then yet another shell. :( I want to know how many shells deep I am, perhaps even have it on my shell screen at all times. (I can manage that part). My solution : Parse the process tree and look for vim and bash/zsh and figure out the current process's depth within it. Does something like that already exist? I could not find anything.
When I read your question, my first thought was $SHLVL . Then I saw that you wanted to count vim levels in addition to shell levels. A simple way to do this is to define a shell function: vim() { ( ((SHLVL++)); command vim "$@");} This will automatically and silently increment SHLVL each time you type a vim command. You will need to do this for each variant of vi / vim that you ever use; e.g., vi() { ( ((SHLVL++)); command vi "$@");}view() { ( ((SHLVL++)); command view "$@");} The outer set of parentheses creates a subshell,so the manual change in the value of SHLVL doesn’t contaminate the current (parent) shell environment. Of course the command keyword is there to prevent the functionsfrom calling themselves (which would result in an infinite recursion loop). And of course you should put these definitionsinto your .bashrc or other shell initialization file. There’s a slight inefficiency in the above. In some shells (bash being one), if you say ( cmd 1 ; cmd 2 ; … ; cmd n ) where cmd n is an external, executable program(i.e., not a built-in command), the shell keeps an extra process lying around,just to wait for cmd n to terminate. This is (arguably) not necessary;the advantages and disadvantages are debatable. If you don’t mind tying up a bit of memory and a process slot(and to seeing one more shell process than you need when you do a ps ),then do the above and skip to the next section. Ditto if you’re using a shell that doesn’t keep the extra process lying around. But, if you want to avoid the extra process, a first thing to try is vim() { ( ((SHLVL++)); exec vim "$@");} The exec command is there to prevent the extra shell process from lingering. But, there’s a gotcha. The shell’s handling of SHLVL is somewhat intuitive:When the shell starts, it checks whether SHLVL is set. If it’s not set (or set to something other than a number),the shell sets it to 1. If it is set (to a number), the shell adds 1 to it. But, by this logic, if you say exec sh , your SHLVL should go up. But that’s undesirable, because your real shell level hasn’t increased. The shell handles this by subtracting one from SHLVL when you do an exec : $ echo "$SHLVL"1$ set | grep SHLVLSHLVL=1$ env | grep SHLVLSHLVL=1$ (env | grep SHLVL)SHLVL=1$ (env) | grep SHLVLSHLVL=1$ (exec env) | grep SHLVLSHLVL=0 So vim() { ( ((SHLVL++)); exec vim "$@");} is a wash; it increments SHLVL only to decrement it again.You might as well just say vim , without benefit of a function. Note: According to Stéphane Chazelas (who knows everything) , some shells are smart enough not to do this if the exec is in a subshell. To fix this, you would do vim() { ( ((SHLVL+=2)); exec vim "$@");} Then I saw that you wanted to count vim levels independently of shell levels. Well, the exact same trick works (well, with a minor modification): vim() { ( ((SHLVL++, VILVL++)); export VILVL; exec vim "$@");} (and so on for vi , view , etc.) The export is necessarybecause VILVL isn’t defined as an environment variable by default. But it doesn’t need to be part of the function;you can just say export VILVL as a separate command (in your .bashrc ). And, as discussed above, if the extra shell process isn’t an issue for you,you can do command vim instead of exec vim , and leave SHLVL alone: vim() { ( ((VILVL++)); command vim "$@");} Personal Preference: You may want to rename VILVL to something like VIM_LEVEL .  When I look at “ VILVL ”, my eyes hurt; they can’t tell whether it’s a misspelling of “vinyl” or a malformed Roman numeral. 
If you are using a shell that doesn’t support SHLVL (e.g., dash),you can implement it yourself as long as the shell implements a startup file. Just do something like if [ "$SHELL_LEVEL" = "" ]then SHELL_LEVEL=1else SHELL_LEVEL=$(expr "$SHELL_LEVEL" + 1)fiexport SHELL_LEVEL in your .profile or applicable file. (You should probably not use the name SHLVL , as that will cause chaosif you ever start using a shell that supports SHLVL .) Other answers have addressed the issueof embedding environment variable value(s) into your shell prompt,so I won’t repeat that, especially you say you already know how to do it.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/373704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235096/" ] }
373,718
So I just installed and ran rkhunter which shows me green OKs / Not founds for everything except for: /usr/bin/lwp-request , like so: /usr/bin/lwp-request [ Warning ] In the log it says: Warning: The command '/usr/bin/lwp-request' has been replaced by a script: /usr/bin/lwp-request: Perl script text executable I already ran rkhunter --propupd and sudo apt-get update && sudo apt-get upgrade which didn't help. I installed Debian 9.0 just a few days ago and am a newcomer to Linux. Any suggestions on what to do? Edit : Furthermore chkrootkit gives me this: The following suspicious files and directories were found: /usr/lib/mono/xbuild-frameworks/.NETPortable /usr/lib/mono/xbuild-frameworks/.NETPortable/v5.0/SupportedFrameworks/.NET Framework 4.6.xml /usr/lib/mono/xbuild-frameworks/.NETFramework/usr/lib/python2.7/dist-packages/PyQt5/uic/widget-plugins/.noinit /usr/lib/python2.7/dist-packages/PyQt4/uic/widget-plugins/.noinit/usr/lib/mono/xbuild-frameworks/.NETPortable/usr/lib/mono/xbuild-frameworks/.NETFramework I guess that's a separate question? Or is this no issue at all? I don't know how to check if these files/directories are ok and needed. Edit : Note I once also got warnings for "Checking for passwd file changes" and "Checking for group file changes" even though I didn't change any such afaik. An earlier and later scan showed no warnings - these just showed once. Any ideas?
rkhunter needs to know what package manager you are using. Create or edit /etc/rkhunter.conf.local and add the following line: PKGMGR=DPKG If you are not on Debian or Ubuntu, then change DPKG for your actual package manager. This way, rkhunter will know to expect those executables to be scripts, and not flag the false positive. It will ensure that if the files are tampered with, then a new positive result will show.
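Something like this should do it, assuming Debian's usual configuration paths:

echo 'PKGMGR=DPKG' | sudo tee -a /etc/rkhunter.conf.local
sudo rkhunter --propupd          # rebuild the file-properties database
sudo rkhunter --check --sk       # re-run the scan; --sk skips the keypress prompts

The lwp-request warning should then disappear, because rkhunter asks dpkg whether the script is the one shipped by the package instead of assuming the command must be a binary.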
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/373718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233262/" ] }
373,720
Given: $ shopt -s extglob$ TEST=" z abcdefg";echo ">>${TEST#*( )z*( )}<<">> abcdefg<< Why is there a space before the letter 'a'? I would expect that the 2nd *( ) would match the space, but it does not seams to do so. I expected the equivalent of: $ echo ">>$(echo -n "${TEST}" | perl -pe "s/^ *z *//g")<<">>abcdefg<< The 2nd *( ) matches if I specify the next following character (which is 'a'): $ shopt -s extglob$ TEST=" z abcdefg";echo ">>${TEST#*( )z*( )a}<<">>bcdefg<< Bash version: GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
${TEST#...} removes the shortest match of the pattern: here that is the leading space and z, with the second *( ) matching zero spaces, which is why the space before a survives. You need ${TEST##...}, which removes the longest match.
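With your example:

$ shopt -s extglob
$ TEST=" z abcdefg"; echo ">>${TEST##*( )z*( )}<<"
>>abcdefg<<

With ## the pattern is matched greedily, so the second *( ) now also swallows the space before a.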
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373720", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160335/" ] }
373,795
In bash one can do the following: bind -x '"\C-l":ls' to map Ctrl + L to external (ie. system's rather than readline function) ls command to list directory contents. How to do the same in zsh , and preferably with Alt ( \M-l ?) instead of Ctrl as it is seemingly already bound to clear to clear the screen.
% namingthingsishard () { echo; ls; zle redisplay }% zle -N namingthingsishard % bindkey '^l' namingthingsishard % This binds control+l because I don't know what \M-l generates for you; running read -r and then mashing keys might show something suitable to use with bindkey , or run bindkey with no arguments to show what is already set. For more information on bindkey and widgets, see zshzle(1) .
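To get the Alt binding you asked about (and leave Ctrl+L on clear-screen), you can bind the escape-prefixed sequence instead, assuming your terminal sends ESC l for Alt+l:

bindkey '^[l' namingthingsishard    # '\el' is an equivalent spelling

If it doesn't trigger, run cat -v (or read -r as suggested above) and press Alt+l to see exactly what your terminal emits.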
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
373,798
I am a student. My teacher has asked me to get the energy (power) information of some servers. I have seen some tips but they do not work. I am using CentOS 7, and unfortunately I do not have the /sys/class/power_supply/... files. What should I do?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238050/" ] }
373,812
Hello I am trying to create a simple file that reads in a filename and optionally the number of lines to output but I can't get my script to accept the positional parameter that I have assigned to hold the file name. My Code: ## task_4-1.sh filename [howmany] - optional parameter#### Read a file containing the names of and number of albums of artist in the album collection.# Program will print the N highest lines of Artist with the highest Album count int he collection# Default N is 10 # One arguement for the name of the input file and a second optional line for how many lines to print.#!\bin\bash#Variable to accept the filename will print an error message if this value is left null filename=${1:?"missing."}#Optional parameter that controls how many lines to Output to shellhowmany=${2:-10}sort -nr $filename | head $howmany When I try running the code on the commandline with: task_4-1.sh albumnumber.txt 10 I get the error message head: 10 no such file or directory. If I switch assigning the parameter to this bracket syntax: filename=${filename:?"missing."} Then I get the error message I specified filename: "missing" Don't know what I am doing wrong here.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373812", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238063/" ] }
373,831
I generated a PGP key with GnuPG over a year ago. Since I haven't had to really touch it since, I'm extremely foggy on the ins and outs of GPG (though I understand asymmetric key encryption in principle). I had used this key to authenticate SSH logins, right up until accidentally deleted it yesterday. So, today, I set out to generate it again. I run gpg --export-secret-key -a "Ryan Lue" > ~/.ssh/id_rsa , and it prompts me for a password. I enter the password, and out comes the id_rsa file. Now, when I try to SSH into my servers, it throws the following warning: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@Permissions 0644 for 'id_rsa' are too open.It is required that your private key files are NOT accessible by others.This private key will be ignored. So, I obediently chmod 600 ~/.ssh/id_rsa . Then, I try again, and it prompts for a password (actually, since I'm on a Mac, Keychain prompts me for a password). I enter the same password I used to export it, and each time, it fails, spitting out the following error on the command line: Saving password to keychain failed I've also tried adding the key using ssh-agent , and that actually prompts me for the password on the command line: Enter passphrase for /Users/rlue/.ssh/id_rsa: Either way, it keeps rejecting the password. I'm 100% sure I'm entering the same passphrase at these prompts as I do to export it: I've successfully exported the key about a dozen times and failed to authenticate it in use about four dozen times. What am I missing?
OpenPGP (as implemented by GnuPG) and SSH do not share a common key format, although they rely on the same cryptographic principles. GnuPG implements the ssh-agent protocol, though, so you can still use your OpenPGP keys through GnuPG for SSHing into other computers. enable the ssh-agent protocol by adding enable-ssh-support to ~/.gnupg/gpg-agent.conf export SSH_AUTH_SOCK=$HOME/.gnupg/S.gpg-agent.ssh ; you might want to do that in your ~/.profile kill ssh-agent if started and reload gpg-agent ( gpg-connect-agent reloadagent /bye ) export and add your public key to target servers ( ssh-add -L should now contain the familiar SSH public key line for your OpenPGP key) ssh to the target server as with a normal SSH key This also works great with OpenPGP smartcards or USB dongles, I'm using this to protect my SSH key with a YubiKey.
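A condensed version of those steps, assuming a default ~/.gnupg layout and an authentication-capable OpenPGP key already present:

echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
export SSH_AUTH_SOCK="$HOME/.gnupg/S.gpg-agent.ssh"   # put this in ~/.profile
gpg-connect-agent reloadagent /bye
ssh-add -L                                            # prints the SSH public key derived from your OpenPGP key

Append the ssh-add -L output to ~/.ssh/authorized_keys on the target servers. Note that newer GnuPG versions put the socket under /run/user/<uid>/gnupg/ instead; in that case gpgconf --list-dirs agent-ssh-socket gives the right path to export.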
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373831", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176219/" ] }
373,880
I am trying to mount hdfs on my local machine(ubuntu) using nfs by following the below link:-- https://www.cloudera.com/documentation/enterprise/5-2-x/topics/cdh_ig_nfsv3_gateway_configure.html#xd_583c10bfdbd326ba--6eed2fb8-14349d04bee--7ef4 So,at my machine I installed nfs-common using:- sudo apt-get install nfs-common Then,before mounting I have ran these commands:- rpcinfo -p 192.168.170.52program vers proto port service100000 4 tcp 111 portmapper100000 3 tcp 111 portmapper100000 2 tcp 111 portmapper100000 4 udp 111 portmapper100000 3 udp 111 portmapper100000 2 udp 111 portmapper100024 1 udp 48435 status100024 1 tcp 54261 status100005 1 udp 4242 mountd100005 2 udp 4242 mountd100005 3 udp 4242 mountd100005 1 tcp 4242 mountd100005 2 tcp 4242 mountd100005 3 tcp 4242 mountd100003 3 tcp 2049 nfsshowmount -e 192.168.170.52Export list for 192.168.170.52:/ * after that i tried mounting the hdfs using:-- sudo mount -t nfs -o vers=3,proto=tcp,nolock 192.168.170.52:/ /mnt/hdfs_mount/ But i was getting this error:--- mount.nfs: mount system call failed Then i googled for the problem and installed nfs-kernel-server,portmap using sudo apt-get install nfs-kernel-server portmap After executing the above command,the output for:--- rpcinfo -p 192.168.170.52 is:-- 192.168.170.52: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host) and for showmount -e 192.168.170.52 is:--- clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host) Also the output for:-- sudo service nfs start comes out to be:-- Failed to start nfs.service: Unit nfs.service not found. Please help me with this.
I was testing this issue on CentOS 7. When you encounter such a problem you have to dig deeply. The problem: clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host) is related to firewall. The command showmount -e IP_server shows all mounts that are available on server. This command works fine, but you have to be careful which port to open. It does not get through the firewall if only port 2049 has been opened. If the firewall on the NFS server has been configured to let NFS traffic get in, it will still block the showmount command. To test if you disable firewall on server you should get rid of this issue. So those ports should be open on server: firewall-cmd --permanent --add-service=rpc-bindfirewall-cmd --permanent --add-service=mountdfirewall-cmd --permanent --add-port=2049/tcpfirewall-cmd --permanent --add-port=2049/udpfirewall-cmd --reload Additional test 2049/NFS port for availability. semanage port -l | grep 2049 - returns SELinux context and the service name netstat -tulpen | grep 2049
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/373880", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179796/" ] }
373,882
I am trying to write a command that prints those of its parameters which correspond to regular files containing the text "main()" in any of their first 10 lines. What should I use to scan all the files and look only at the first 10 lines of each?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/373882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238110/" ] }
373,947
I'm trying to understand the following statement from the OpenVPN manual. https://community.openvpn.net/openvpn/wiki/IPv6 In a routed setup, you cannot use your on-link network; you must use a unique routed network range, just like when routing with IPv4. In short, this is saying that to run IPv6 through open VPN (without hiding it behind a NAT) you need to have two IPv6 /64 blocks. Or at least one block and a completely separate IPv6 address. Can anyone explain why this is? I'm struggling to get my head round this as I have a server with a /64 block assigned to it. From the network's point of view all traffic in that block must be sent to my server (?). On that server all traffic to that block would be sent via VPN if not its own IP. From the VPN client's point of view all traffic to that block would be sent via VPN if not its own IP. I can't see why having the same /64 block assigned to two interfaces would cause a problem where the entire block is assigned to one server.
You cannot have two separate networks that use the same address space. If the uplink ethernet port of your server has the /64, then the OpenVPN interface can't have the same one, because it is a separate network. Technically your server also doesn't have the whole /64. It will have one or more addresses of the /64 configured on its ethernet interface. It has no way to know that your provider has reserved the whole /64 for it. There are also providers that put multiple customers in a shared /64. The server can't know. So to do it properly you need to have a separate /64 for your OpenVPN network, and that /64 has to be routed through your server (notice the "through", it doesn't belong to the server, the server merely routes traffic for it, some of the addresses will belong to VPN clients). That will require your provider to set that up for you. They will configure a /64 on the network between their router and your server (which is the /64 you already have) and configure a route for the other /64 that has your server as the next hop (router). Another solution would be to configure OpenVPN in layer-2 mode, and create an ethernet bridge between your server's ethernet port and the OpenVPN tap interface. That way you won't have two separate networks that each need their separate /64. You will have one network with a bridge in it. One network only needs one /64. You can bridge the existing network to your VPN clients. You'll have to decide whether that is a possible solution for you.
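As an illustration only (the 2001:db8: prefixes are documentation addresses, not ones you can actually route): if your provider puts 2001:db8:a::/64 on the link to your server and routes a second prefix 2001:db8:b::/64 to it, the relevant server.conf lines would look roughly like:

server-ipv6 2001:db8:b::/64      # the routed /64, handed out to VPN clients
push "route-ipv6 2000::/3"       # send clients' public IPv6 traffic through the tunnel

plus net.ipv6.conf.all.forwarding=1 set via sysctl on the server. For the bridged (layer-2) alternative you would instead use dev tap together with an ethernet bridge, and no separate prefix is needed.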
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/373947", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20140/" ] }
374,012
So I'd like to manage my autostart applications and e.g. disable those which I prefer not to autostart. How can I do that in Debian 9.0? I could not do so via the System Monitor and I'd prefer a GUI over the console. Furthermore it would be nice if such a tool also displayed some information about the apps/processes such as what they do, whether they're safe to disable and e.g. things like whether many have them running as well and whether (many/specific) users have flagged them for being undesired.
There are (at least) two packages in Debian which provide tools to manage startup applications. The first is gnome-tweak-tool ; its “Startup Applications” tab allows you to manage your startup applications in your desktop environment. The second is systemd-ui ; it shows all the configured systemd units and jobs on your system, and allows you to start, stop, restart and reload units. It also displays the description and dependencies of each unit (but not the links to the documentation which may be given in the unit). It doesn’t seem to allow enabling and disabling units though, which is probably what you’re after.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233262/" ] }
374,066
I have a file which contains already ordered data and I'd like to re-order the file according to the values in one key, without destroying the order of the data in the other keys. How do I prevent GNU sort from performing row sorting based on the values of keys I have not specified, or how do I specify to GNU sort to ignore a range of keys when sorting? File data.txt: 1 Don't2 C 1 Sort2 B1 Me2 A Expected output: 1 Don't1 Sort1 Me2 C2 B2 A Command: sort -k 1,1 <data.txt Result: unwanted sorting I didn't ask for: 1 Don't1 Me1 Sort2 A2 B2 C
You need a stable sort . From man sort : -s, --stable stabilize sort by disabling last-resort comparison viz.: $ sort -sk 1,1 <data.txt1 Don't1 Sort1 Me2 C 2 B2 A Note that you probably also want a -n or --numeric-sort if your key is numeric (for example, you may get unexpected results when comparing 10 to 2 with the default - lexical - sort order). In which case it's just a matter of doing: sort -sn <data.txt No need to extract the first field as the numeric interpretation of the whole line will be the same as the one of the first field.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/374066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238243/" ] }
374,084
In my environment, the actual .ssh directory lives on an external device and ~/.ssh is a symbolic link to it. Using openssh as a client works without problems, but sshd does not allow public-key authentication with the keys inside it. Is there any way to keep the .ssh directory on an external device?

journalctl -u sshd
Authentication refused: bad ownership or modes for directory /pool/secure/ssh

permissions

$ ls -ld ~/.ssh
lrwxrwxrwx. 1 foobar foobar 28 Mar 7 19:59 .ssh -> /pool/secure/ssh
$ ls -l /pool/secure/ssh
-rw------- 1 foobar foobar 381 Jun 29 15:01 authorized_keys
-rw------- 1 foobar foobar 292 Jun 29 15:01 config
-rw-------. 1 foobar foobar 5306 Jun 23 02:16 known_hosts
$ ls -ld /pool/secure/ssh
drwx------. 2 foobar foobar 8 Jun 29 15:01

version

$ ssh -V
OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017

Added at 2017-06-29 (Tips): OpenBSD and FreeBSD can modify the access permissions of a symlink with chmod, but Linux has no system call for that operation; stat(2) follows the link. The Base Specifications Issue 7 leave the value of the file mode bits returned in the st_mode field unspecified.

Solved at 2017-06-30: auth_secure_path has the answer. This function checks the permissions of the file and of every directory on the path, including the parent directories. It keeps checking that the correct permissions (only the owner can write) are set until it passes the home directory or the root.

ex) general environment
/home/foobar/.ssh (raise error if group and other can write)
/home/foobar (same)
break!

ex) special environment (like me)
/home/foobar/.ssh -> /pool/secure/ssh
/pool/secure/ssh (raise error if group and other can write)
/pool/secure (same)
/pool (same)
/ (same)
break!
It is a permissions issue. You need to check permissions for all directories above and including foobar 's home, and also all directories above the target .ssh directory on your external device. Apart from foobar and the target .ssh directories, all others must be owned by root and not writeable by anyone else. You may also have an SELinux issue. You can check the SELinux security context of files and directories with the -Z flag: [sheepd0g@dogpound ~]$ ls -ZAdrwxr-xr-x. root root system_u:object_r:home_root_t:s0 ..drwxrwxr-x. sheepd0g sheepd0g unconfined_u:object_r:user_home_t:s0 20170620-auditlogs-rw-rw-r--. sheepd0g sheepd0g unconfined_u:object_r:user_home_t:s0 random.datdrwx------. sheepd0g sheepd0g unconfined_u:object_r:ssh_home_t:s0 .ssh A couple things to note: The period at the end of the permission mode fields means SELinux context is active for that file. Notice the Type field for the .ssh folder is different (ssh_home_t). SELinux objects, types, policies, and settings may not be the same across distributions, or even major versions. What works for RHEL6 may not for, say SUSE 10 or Debian 6 (I'm not sure Debian 6 even has SELinux enforcing, out of the box...) Regardless, this is a good place to look if all else fails. You can check if SELinux is in enforcing mode easily enough with the following: [sheed0g@dogpound ~]$ sudo getenforceEnforcing If you suspect SELinux us the issue, you can switch SELinux to Permissive mode (policies are enabled, but no action is taken -- just logging/auditing of actions): [sheepd0b@dogpound ~]$ sudo setenforce 0[sheepd0b@dogpound ~]$ sudo getenforcePermissive If your issue goes away, this is likely the problem. Please note, there is A LOT more complexity to SELinux than what is represented here. If your .ssh/ is on an NFS share you will be required to make more changes with boolean settings for SELinux. Here are two good references for SELinux: CentOS wiki entry on SELinux Red Hat Enterprise Linux 7 SELinux guide
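Given the paths from the question, a concrete fix might look like this sketch (the SELinux part only applies if getenforce says Enforcing):

# every directory above the real .ssh (and above your home) must not be group/other-writable
sudo chmod go-w / /pool /pool/secure
chmod go-w ~
chmod 700 /pool/secure/ssh
chmod 600 /pool/secure/ssh/authorized_keys

# give the real directory the context sshd expects
# (semanage comes from policycoreutils-python on CentOS/RHEL)
sudo semanage fcontext -a -t ssh_home_t "/pool/secure/ssh(/.*)?"
sudo restorecon -Rv /pool/secure/ssh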
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374084", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/159784/" ] }
374,103
I recently found that (among others) network shares can be automatically mounted on access. In Ubuntu there are two options: either using autofs or the automount feature of systemd. Could someone tell me what the differences between the two options are (apart from the different configuration)? Autofs seems to be more flexible as one can configure scripts for the automount locations. Is this possible using the systemd automount as well? UPDATE 2017-07-25: Just a brief update for everyone stumbling across this question. I went with the systemd automount option as it is way more convenient and easier to configure, while providing nearly the same functionality. If there is an fstab entry with noauto and x-systemd.automount, reloading the systemd daemon (systemctl daemon-reload) will generate a systemd automount unit under /run/systemd/generator/ (at least this is the path where it gets generated under Ubuntu 16.04). The unit will be named after the mountpoint of the fstab entry. That is, if you create an automount point for /media/network/someserver/share there will be a systemd automount unit media-network-someserver-share.automount. This automount unit can then be (re-)started to activate the mountpoint using systemctl restart media-network-someserver-share.automount. And you're done.
I think you've largely answered your own question. Systemd approaches most things in a "just in time" manner, so adding automount was an obvious extension. The configuration uses a common style, but isn't super flexible. Autofs is the old way we used to do this. It's flexible, the config is kind of complex/weird, and it's probably not installed by default. You probably want systemd unless your needs are complex. A simple automount setup guide is here: http://blog.tomecek.net/post/automount-with-systemd/
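For a network share, the systemd route is just an fstab entry; the line below is only one plausible combination of options, with the server, share and credentials file as placeholders:

//fileserver/share  /media/network/fileserver/share  cifs  credentials=/etc/smb-cred,noauto,x-systemd.automount,x-systemd.idle-timeout=60,_netdev  0  0

After sudo systemctl daemon-reload, the generated .automount unit mounts the share on first access and, thanks to x-systemd.idle-timeout, unmounts it again after a minute of inactivity.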
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374103", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176565/" ] }
374,109
I'm trying to test if ~/.ssh/id_rsa is actually password protected. When you run ssh-keygen you can choose an empty password, and I'm trying to detect this. Is that possible with a one-liner?
If you execute: ssh-keygen -y -f ~/.ssh/name_of_key you will get key printed if there is no password like this: ssh-keygen -y -f ~/.ssh/id_dsassh-dss AAAAB3NzaC1kc3M.... If there is password of the key you will be asked for it
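As a non-interactive one-liner, the trick is supplying an empty passphrase with -P, which only succeeds if the key is unencrypted (at least with OpenSSH's ssh-keygen):

ssh-keygen -y -P "" -f ~/.ssh/id_rsa >/dev/null 2>&1 \
  && echo "key has NO passphrase" \
  || echo "key is passphrase-protected (or unreadable)"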
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
374,127
To delete the line with # , the following line can work: sed -i "/^\(\\s\)*\\#/d" but what I want is to delete lines that start with # and not with #! .
sed -i -e '/^\s*#\([^!]\|$\)/d' Where: ^ start of line \s* zero or more whitespace characters # one hash mark \([^!]\|$\) followed by a character which is not ! or end of line.
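A quick demonstration of what it keeps and removes, using a small test input:

printf '%s\n' '#!/bin/sh' '# a comment' '   # indented comment' '#' 'echo hi' \
  | sed -e '/^\s*#\([^!]\|$\)/d'

Only the #!/bin/sh line and echo hi survive; plain comments, indented comments and a bare # are all deleted. Note that \s is a GNU sed extension; with other sed implementations use [[:space:]] instead.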
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374127", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238292/" ] }
374,163
I am working on linux server. I want to know whether there exists a PID for each tomcat service running on any server. If there exists a PID for a particular tomcat service, then can we find the service name corresponding to that PID? Can we list all the tomcat services running on the server?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374163", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237967/" ] }
374,167
I need a script or command that prints the number of directories whose names begin with "lib" in a whole directory subtree. I was trying to do it using find, grep, and wc but couldn't scan all the directories. How can I do it?
find . -type d -name lib\* -exec echo x \; | wc -l
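If you have GNU find, the -exec echo trick can be avoided; either of these gives the same count (the second is also immune to newlines in directory names):

find . -type d -name 'lib*' | wc -l
find . -type d -name 'lib*' -printf '.' | wc -c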
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374167", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238110/" ] }
374,199
Assume a text string my_string $ my_string="foo bar=1ab baz=222;" I would like to extract the alphanumeric string between keyword baz and the semi-colon. How do I have to modify the following grep code using regex assertions to also exclude the trailing semi-colon? $ echo $my_string | grep -oP '(?<='baz=').*'222;
Unless the string that you want to extract may itself contain ; , the simplest thing is probably to replace . (which matches any single character) with [^;] (which matches any character excluding ; ) $ printf '%s\n' "$my_string" | grep -oP '(?<='baz=')[^;]*'222 With grep linked to libpcre 7.2 or newer, you can also simplify the lookbehind using the \K form: $ printf '%s\n' "$my_string" | grep -oP 'baz=\K[^;]*'222 Those will print all occurrences in the string and assume the matching text doesn't contain newline characters (since grep processes each line of input separately).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/193926/" ] }
374,223
Is it possible to install apt-get on redhat? I'm under the impression that you can't, but I just wanted to be sure. If it is possible, life would be MUCH easier when installing various programs, especially because yum really doesn't have as many programs available it seems. Here's what I've tried (just for the record): I've been trying to install apt-get following these instructions, but redhat doesn't have dpkg, so I'm back to square 1. I am asking this question because I'm having some difficulty installing a plugin for pidgin (pidgin-sipe) because yum install libglib2.0-dev is failing, which is evidence to me that having apt-get might be a worthwhile investment. Any suggestions?
TL;DR apt usually doesn't work out of the box with Enterprise Linux based distros and you won't find many repos that work for you anyway. If you are having trouble finding the software you want on Red Hat, it is because your repositories don't have the packages. What you want to look into is adding different repositories. For Red Hat Enterprise Linux, the first repo to add is usually Extra Packages For Enterprise Linux (EPEL) hosted at The Fedora Project. You will likely find a LOT of what you are missing in that repo. More information: While it is certainly possible to install the apt package management utilities on an Enterprise Linux system, that does not mean you will be able to do anything with it once you are done. The issue here is the apt utility is a program that works with published directories of software packages (repositories is the usual name for me, but it can vary). Yum, rpm, dnf, emerge, etc. are all utilities on varying *NIX distributions that do the same thing. But they don't offer the software themselves; they are configured to query the repositories and provide packages from them. The other issue is that the common repositories you find online are often created and configured to work with the native package management utilities for the OS they are offering software for. You could probably configure apt on your RHEL7 system to query the Debian repos, but the software likely would be incompatible with your system because of the differences in how Debian and Red Hat build, layout, structure, and configure their operating systems. It's like trying to install Mac OS X software on your Linux system. They are both technically *NIX based, but they vary widely in how they function.
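For example, on CentOS (or with the equivalent epel-release RPM from the Fedora Project on RHEL itself) the whole thing is usually just:

sudo yum install epel-release
sudo yum install pidgin-sipe

The package names here are only what I'd expect to find; check with yum search pidgin-sipe first. On RHEL proper you may also need to enable the "optional" channel/repository for some EPEL dependencies.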
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374223", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238345/" ] }
374,349
I recently purchased a laptop, an Acer Aspire R15 with these specs: 17 7500U, 12GB DDR4, 256GB SSD, GTX 940MX 2GB. It comes preinstalled with Windows 10, but I wanted to install Debian in a dual boot configuration for programming. Anyways, I installed Debian on the C: drive in a separate partition, and installed grub. When I restarted the PC, it went straight into Windows 10 without launching grub. I did several google searches and ended up trying this, but this method did not work and yet again, my PC booted straight into Windows. I then tried this, which also did not work. I then tried directly installing reFIND through debian after booting into debian with a usb flash drive of refine to try to see if I could use refind as a substitute for Grub, but that also did nothing. TL;DR: My pc boots directly into windows instead of loading grub, and I tried every method I found to fix this, but none of them worked. Can someone help me get my pc to boot with grub?
Your UEFI is booting the first thing it sees, which happens to be the Windows 10 bootloader. You should change this to GRUB/rEFInd as follows: On Windows 10, boot into UEFI settings as follows: Open Settings Update and Security Recovery Advanced Startup > Restart now Troubleshoot Advanced Options UEFI Firmware settings Go to the boot tab of the UEFI settings Move the Linux bootloader (GRUB or rEFInd) above the Windows 10 bootloader (instructions to do this are usually at the bottom of the screen) Save and reboot In my experience, you do not need to disable secure boot, enable legacy mode, etc. Now, you should be able to use your new bootloader to boot Linux. While most distros add an entry to boot Windows 10 as well, you may need to do this manually to boot to Windows 10.
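If you'd rather fix the boot order from Linux (for example after booting your Debian install once via the firmware's one-time boot menu), efibootmgr can do the same thing as the firmware settings screen; the entry numbers below are just an example, use whatever your own listing shows:

sudo efibootmgr                  # list entries, e.g. Boot0000* debian, Boot0001* Windows Boot Manager
sudo efibootmgr -o 0000,0001     # put the debian/GRUB entry first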
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/374349", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238258/" ] }
374,499
I would like to replace everything but my ::ID and [ID2], but could not really find a way to do it with sed and keep to match, any suggestion? For example: TRINITY_DN75270_c3_g2::TRINITY_DN75270_c3_g2_i4::g.22702::m.22702 [sample] I would like to have: TRINITY_DN75270_c3_g2_i4[sample] Any suggestion?
For a given input as provided, this sed expression seems to do what you ask: $ cat input`>TRINITY_DN75270_c3_g2::TRINITY_DN75270_c3_g2_i4::g.22702::m.22702 [sample]`$ sed 's/^.*::\([A-Z_0-9a-z]*\)::.*\[\(.*\)\].*/\1[\2]/' inputTRINITY_DN75270_c3_g2_i4[sample] The magic is in using regular expression groups, and two backreferences to reconstruct the desired output. To expound: NODE EXPLANATION-------------------------------------------------------------------------------- ^ the beginning of the string .* any character except \n (0 or more times (matching the most amount possible)) :: '::' \( group and capture to \1: [A-Z_0-9a-z]* any character of: 'A' to 'Z', '_', '0' to '9', 'a' to 'z' (0 or more times (matching the most amount possible)) \) end of \1 :: '::' .* any character except \n (0 or more times (matching the most amount possible)) \[ '[' ( group and capture to \2: .* any character except \n (0 or more times (matching the most amount possible)) ) end of \2 \] ']' .* any character except \n (0 or more times (matching the most amount possible)) So \1 is the first key you wanted to extract, and \2 is whatever is in the square braces afterward. Is is then reconstructed by \1[\2]/ , creating your desired output.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238552/" ] }
374,673
I've got a folder with many files (xyz1, xyz2, all the way up to xyz5025) and I need to run a script on every one of them, getting xyz1.faa, xyz2.faa, and so on as outputs. The command for a single file is: ./transeq xyz1 xyz1.faa -table 11 Is there a way to do that automatically? Maybe a for-do combo?
for file in xyz*do ./transeq "$file" "${file}.faa" -table 11done This is a simple for loop that will iterate over every file that starts with xyz in the current directory and call the ./transeq program with the filename as the first argument, the filename followed by ".faa" as the second argument, followed by "-table 11".
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/374673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238538/" ] }
374,683
I have a 32 GB USB flash drive. When deleting files from the drive where the drive is plugged into a Ubuntu 16 laptop, it creates a folder called '.Trash-1000' This .Trash-1000 folder contains two folders which are 'file' and 'info' where file contains the files I have deleted and info contains metadata about those files. The issue is this .Trash-1000 folder takes up space because it holds a copy of the deleted file. I then have to eventually delete the .Trash-1000 folder when it starts filing up after multiple deletes. Is there a way to disable this feature on the USB drive?
Have a look at this article . According to the article, Ubuntu will create such folders when a file is deleted from a USB drive. Presumably this would allow a file to be restored if you accidentally deleted it. It contains the following solution: Don't use the delete button only (Otherwise the .Trash-1000 folder will be created) Press the key combination shift+delete together to delete then Ubuntu won't create a .Trash-1000 folder. (Note: If you delete files and folders this way they are gone forever!) As alternative you can also use the command line's rm command which will also delete the file directly.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237628/" ] }
374,691
I need to run a program using 2 files as input expecting 1 output having 6000 files ranging from abc0000.faa/abc0000.fna to abc6000.faa/abc6000.fna. I also need the output file to have the same file name as the inputs but .paml extension. This is an example of the full command just for files 0000. ./pal2nal.pl abc0000.faa abc0000.fna -codontable 11 -output paml > abc0000.paml Is there a way of running the same command for all files automatically? Something like a for-in-do? Thanks!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238538/" ] }
374,748
I'm unable to update my system having two installations of Ubuntu: One is version 16.04 and the other version is 17.04. In both, I'm getting the same error. For ex., in Ubuntu 16.04, I run software updater and get the result as shown below. I did wait for some time but the updater didn't proceed ahead. Then I pressed the Stop button and it took me to the below pop-up. Then I pressed the button Install now and it took me to the next pop-up as shown below. I waited here for some time but it got stuck there. I'm unable to update in either installation. What is the solution as I can't do any update? (Also would like the viewer to see if unauthorized tampering, remotely or otherwise, can result in this error. If so, how to solve the issue?) If I fail to update, I may be compelled to take the trouble of reinstalling both the installations from scratch which I would like to avoid. Referring to the 3rd picture above that mentioned "installing updates": It did proceed ahead and updated completely. But after rebooting and running again the software updater , I came across a new issue. Now on running the software updater , it messages check your Internet connection . I've posted the question here .
I would first try a softer way. Stop the automatic updater: sudo dpkg-reconfigure -plow unattended-upgrades At the first prompt, choose not to download and install updates. Make a reboot. Make sure any packages in an unclean state are installed correctly: sudo dpkg --configure -a Get your system up-to-date: sudo apt update && sudo apt -f install && sudo apt full-upgrade Turn the automatic updater back on, now that the blockage is cleared: sudo dpkg-reconfigure -plow unattended-upgrades Select the package unattended-upgrades again.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/374748", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46723/" ] }
374,804
I usually delete directories by using rm : rm -r myDir However I am aware of another command, rmdir , which seems to do the job just as well: rmdir myDir What is the difference between these two commands and when should each be used?
rm -r removes a directory and all its contents; rmdir will only remove a directory if the directory is empty. I like to use the following to remove a directory and all its contents: rm -rf <directory_to_be_removed>
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/374804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238522/" ] }
374,823
I am connected with ssh and I want to copy a directory from my local machine to our remote server; how can I do that? I have read several posts about using scp but that didn't work for me. Some posts suggested using rsync, but in my case I just want to copy one directory.
If you want to copy a directory from machine a to b while logged into a: scp -r /path/to/directory user@machine_b_ipaddress:/path/to/destination If you want to copy a directory from machine a to b while logged into b: scp -r user@machine_a_ipaddress:/path/to/directory /path/to/destination
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/374823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238780/" ] }
374,975
I have 2 commands that need to run every hour, so I put these in /etc/cron.hourly/hrcron file, in following format command1; command2 It should've worked in my opinion, but does anyone have any idea what's stopping it from running? I'm running CentOS 6.8.
Files placed in /etc/cron.hourly , cron.daily and cron.monthly need to be executables. If you place a text file with a single line as shown in your question into that directory, it cannot be run at all for the same reason that you could not run such a file as a shell script from the command line, either. What you mean to say is this: #!/bin/shcommand1command2 You could concatenate the second and third lines with a semicolon, but it simply isn't necessary here. It's a full-on shell script, so you don't need to "stack" commands in that way. Also, be sure to mark the script executable, else it still won't run. If all of this seems odd to you, based on your knowledge of crontab entries , realize that executables in these directories are typically run by either anacron or run-parts , not by cron . Thus, the information from man 5 crontab doesn't really apply here.
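So /etc/cron.hourly/hrcron should end up looking like this, and must be executable:

#!/bin/sh
command1
command2

sudo chmod 755 /etc/cron.hourly/hrcron

On CentOS 6 the hourly jobs are typically driven by the /etc/cron.d/0hourly entry, which calls run-parts on /etc/cron.hourly, so you can watch /var/log/cron after the next hour boundary to confirm that hrcron actually ran.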
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/374975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117409/" ] }
375,002
Say I pick some png file and run the following command: xclip -selection clip -t image/png image.png I now have that image in my clipboard. If I paste it somewhere that accepts an image, it is pasted as expected and all is good. Now instead consider that I press ctrl+V while being in the firefox address bar or in the text field I'm typing this in. The binary contents of the file are pasted verbatim into the text field, in some cases causing the browser to hang for a while. Of course I know it doesn't make sense to paste an image there, but I sometimes do it accidentally, and then it causes problems. When I instead paste an image I copied using firefox's "copy image" button, it doesn't get pasted when I try to paste it, so it must be possible to store it in the clipboard to allow for this behaviour. How can I place an image in the clipboard without making the image get pasted verbatim as binary data into text fields? If it's possible to somehow place both an image and a text string (such as the path to the image or something) in the clipboard and have it pick the appropriate one when pasting, that would be awesome.
I copied an image into clipboard with xclip like you did and here's what list of targets I got: > xclip -selection clip -t TARGETS -oTARGETSimage/png and now if I copy an image from a web page I get this: > xclip -selection clip -t TARGETS -oTIMESTAMPTARGETSMULTIPLESAVE_TARGETStext/htmltext/_moz_htmlinfotext/_moz_htmlcontextimage/pngimage/jpegimage/x-iconimage/x-icoimage/x-win-bitmapimage/vnd.microsoft.iconapplication/icoimage/icoimage/icontext/icoimage/tiffimage/bmpimage/x-bmpimage/x-MS-bmp and for example setting target as text/html gives such output > xclip -sel c -t text/html -o <img src="..." alt="...">> So obviously it's xclip problem as stated in prev answer
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91129/" ] }
375,095
Also, what is the difference between pscp, psftp and scp, sftp? I'm guessing PuTTY is originally made for Windows which doesn't have these commands by default, am I right? If that's the case, why would anyone use PuTTY on Linux?
PuTTY is a terminal emulator (able to run shells, which in turn run commands), while the usual ssh command is a client program that gives you a (remote) shell inside whatever terminal emulator you are already using; it is not a terminal emulator itself. PuTTY has been ported to Unix (and Unix-like) systems as pterm. scp is a special case: a program used for copying a few files via an ssh connection. PuTTY on Windows has a similar program, but there's no need for that in the Unix port. sftp (and psftp ...) would be analogous to ftp: specialized programs used for copying many files. Their usefulness depends on what you need to do: some use scp far more often than sftp, and vice versa.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238971/" ] }
375,264
While one can download a .deb package using apt download $package.deb there doesn't seem to be any way to see the metadata of that file. I mean by metadata something like - [$] aptitude show dgit Package: dgit Version: 3.10State: not installedPriority: optionalSection: develMaintainer: Ian Jackson <[email protected]>Architecture: allUncompressed Size: 309 kDepends: perl, libwww-perl, libdpkg-perl, git-core, devscripts, dpkg-dev, git-buildpackage, liblist-moreutils-perl, coreutils (>= 8.23-1~) | realpath, libdigest-sha-perl, dput, curl, apt, libjson-perl, ca-certificates, libtext-iconv-perl, libtext-glob-perlRecommends: ssh-clientSuggests: sbuildDescription: git interoperability with the Debian archive dgit (with the associated infrastructure) makes it possible to treat the Debian archive as a git repository. dgit push constructs uploads from git commits dgit clone and dgit fetch construct git commits from uploads. Hopefully there is a way to view the depends, recommends etc. I had viewed also using less in various forums to do the same thing but couldn't get it to work as well.
dpkg-deb , which is part of dpkg and therefore always available, can show all the control information for a binary package using only its .deb file:

$ dpkg-deb -I joystick_1.6.0-2_amd64.deb
 new debian package, version 2.0.
 size 49454 bytes: control archive=1509 bytes.
     892 bytes,    24 lines      control
    1887 bytes,    30 lines      md5sums
 Package: joystick
 Version: 1:1.6.0-2
 Architecture: amd64
 Maintainer: Stephen Kitt <[email protected]>
 Installed-Size: 176
 Depends: libc6 (>= 2.15), libsdl1.2debian (>= 1.2.11)
 Recommends: evtest, inputattach
 Breaks: stella (<< 4.7.2)
 Replaces: stella (<< 4.7.2)
 Section: utils
 Priority: extra
 Homepage: https://sourceforge.net/projects/linuxconsole/
 Description: set of testing and calibration tools for joysticks
  Some useful tools for using joysticks:
   evdev-joystick(1) - joystick calibration tool
   ffcfstress(1) - force-feedback stress test
   ffmvforce(1) - force-feedback orientation test
   ffset(1) - force-feedback configuration tool
   fftest(1) - general force-feedback test
   jstest(1) - joystick test
   jscal(1) - joystick calibration tool
  .
  evtest and inputattach, which used to be part of this package, are now available separately.

There are quite a few different options to select the content to display, from the package's file listing to specific (binary) control files; see man dpkg-deb for details.
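For other pieces of metadata, dpkg-deb has further switches; a couple of illustrative invocations (the .deb filename is just the example from above):

dpkg-deb -c joystick_1.6.0-2_amd64.deb            # list the files the package would install
dpkg-deb -f joystick_1.6.0-2_amd64.deb Depends    # print a single control field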
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/375264", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
375,309
How would you output the diagonal of a file? E.g. I have a file with the following inside:

1,2,3,4,5
6,7,8,9,0
1,2,3,4,5
6,7,8,9,0
1,2,3,4,5

The output would be supposed to look like:

1 7 3 9 5

or something like that. I can output a column via cut (cut -d "," -f5 filename), but I am unsure what to write in order to output the diagonal only.
awk solution, not as elegant as @don_chrissti's solution, but it works even when the input is not square:

awk -F, '{a=a$++n" "}END{print a}' file
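In case the $++n construct looks opaque: n is incremented once per input line, so on line 1 the script appends field 1, on line 2 field 2, and so on. An equivalent, perhaps more explicit, sketch uses the built-in record counter NR instead of a hand-rolled counter:

awk -F, '{ printf "%s ", $NR } END { print "" }' file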
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239128/" ] }
375,387
I want to trace the networking activity of a command. I tried tcpdump and strace without success. For example, if I am installing a package or using any command that tries to reach some site, I want to view that networking activity (the site it tries to reach). I guess we can do this by using tcpdump. I tried, but it tracks all the networking activity of my system. Let's say I run multiple networking-related commands and I want to track only a particular command's networking activity; then it is difficult to isolate the traffic that belongs to it. Is there a way to do that? UPDATE: I don't want to track everything that goes over my network interface. I just want to track the networking activity of the command (for example #yum install -y vim), such as the site it tries to reach.
netstat for simplicity

Using netstat and grepping on the PID or process name:

# netstat -np --inet | grep "thunderbird"
tcp        0      0 192.168.134.142:45348   192.168.138.30:143    ESTABLISHED 16875/thunderbird
tcp        0      0 192.168.134.142:58470   192.168.138.30:443    ESTABLISHED 16875/thunderbird

And you could use watch for dynamic updates:

watch 'netstat -np --inet | grep "thunderbird"'

With:

-n : Show numerical addresses instead of trying to determine symbolic host, port or user names
-p : Show the PID and name of the program to which each socket belongs.
--inet : Only show raw, udp and tcp protocol sockets.

strace for verbosity

You said you tried the strace tool, but did you try the option trace=network ? Note that the output can be quite verbose, so you might need some grepping. You could start by grepping on "sin_addr".

strace -f -e trace=network <your command> 2>&1 | grep sin_addr

Or, for an already running process, use the PID:

strace -f -e trace=network -p <PID> 2>&1 | grep sin_addr
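Applied to the question's own example, this would look roughly like the following (a sketch; yum needs root, and the grep shows raw connect()/sendto() calls with IP addresses rather than hostnames):

strace -f -e trace=network yum install -y vim 2>&1 | grep sin_addr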
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/375387", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234635/" ] }
375,392
Hi I'm trying out below command to match month and day (of 6 days ago, which is Jun 29) to search a directory using AWK, but the result is always '0' instead it is supposed to be around 1800. ls -ltr /test/output|awk -v month="$(date --date="6 days ago" +"\"%b\"")", -v day="$(date --date="6 days ago" +%d)" '$6 ==month && $7==day {print $9}'|wc -l tried this also ls -ltr /test/output|awk -v month="$(date --date="6 days ago" +%b)", -v day="$(date --date="6 days ago" +%d)" '$6 ==month && $7==day {print $9}'|wc -l but it is working if I hardcode Month ls -ltr /test/output|awk -v month="$(date --date="6 days ago" +"\"%b\"")", -v day="$(date --date="6 days ago" +%d)" '$6 =="Jun" && $7==day {print $9}'|wc -l Please suggest what I'm missing in the code?
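A hedged side note on the command itself, in case it helps: the stray comma after the first -v assignment ends up inside the awk variable, and the escaped quotes around %b make month literally "Jun" including the quote characters, so $6 == month can never match. A sketch without those two problems (the +0 forces a numeric comparison, so a leading zero from date +%d doesn't matter):

ls -l /test/output | awk -v month="$(date --date='6 days ago' +%b)" -v day="$(date --date='6 days ago' +%d)" '$6 == month && ($7+0) == (day+0) {print $9}' | wc -l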
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/375392", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234425/" ] }
375,405
I'm trying to achieve something like this: if the user runs make build_x version=1.0 then show Building version 1.0 , else ( make build_x without the version param) show Building version latest . The key point is that I need version to have - as its default value (to be able to use it properly with git ). This is why I use an additional variable version_info for replacing - with latest . So my code looks like this:

build_x:
    $(eval version ?= -)
    $(eval version_info = ${if ["${version}" == "-"], "latest", "${version}"})
    ${INFO} "Doing checkout according to version $(version_info)..."

I know that the issue is probably with this condition given to if . Any ideas?
This works for me:

version ?= -
ifeq (-,$(version))
  version_info = latest
else
  version_info = $(version)
endif

build_x:
	@echo version_info = $(version_info)

It sets version to - if unset, then fills in version_info appropriately. Using target-specific variables, and a one-liner variant:

build_x: version ?= -
build_x: version_info = $(if $(version:-=),$(version),latest)
build_x:
	@echo version_info = $(version_info)

This works as follows (see the overall GNU Make documentation ):

version ?= - sets version to - if it's not already set
if checks its first argument , evaluates it to see if it's empty or not, and is replaced with the second argument if the first is non-empty, and the third if it is
$(version:-=) evaluates version , replacing - with the empty string ( : introduces the replacement, the search key is the text before = , the replacement is the text after = )
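A quick check of what this prints, assuming the first form above is saved as the Makefile:

$ make build_x
version_info = latest
$ make build_x version=1.0
version_info = 1.0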
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239195/" ] }
375,411
I am currently setting up my Linux system to make some daily tasks easier. I'd like to configure my SSH to be able to jump hosts using the terminal. I read about LocalForward as well as ProxyJump. The goal is to connect to the first server, tunnel the connection over it and then connect to the second server (as the second server is in a zone I can only reach from the first server). Now what I did was the following snippet inside my ~/.ssh/config file:

Host tunnel
    HostName <firstServer>
    IdentityFile ~/.ssh/example.key
    LocalForward 9906 <secondServer>:22
    User helloWorld

If I now connect to the server using "ssh tunnel" I can successfully connect to the first server. If I now use telnet to check on the second server using the forwarded port, I can see that SSH is running on it. If I now try to SSH into the second server using "ssh localhost:9906" I get the information that the hostname couldn't be resolved (same thing for 127.0.0.1:9906). Afterwards I read about the option "ProxyJump" and tried the following:

Host tunnel
    HostName <firstServer>
    ProxyJump <secondServer>:22
    User helloWorld

However, the connection never goes through. It gets stuck on "connection to ". Am I missing something obvious here? Maybe I misunderstand the basic concept of the whole SSH forwarding thing? I am used to using Putty but I recently made the jump to Linux and would like to set everything up appropriately.
This ~/.ssh/config will ProxyJump through jump to the target , and bind a port all the way to target :

Host jump
    HostName <server-ip>
    User user-name
    IdentityFile ~/.ssh/key.pem
    LocalForward 8888 localhost:8888

Host target
    HostName <server-ip>
    User user-name
    IdentityFile ~/.ssh/key.pem
    ProxyJump jump
    LocalForward 8888 localhost:8888

Usage:

ssh target
ssh -v target    # see verbose debugging
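As a side note on the question's "ssh localhost:9906" attempt: ssh does not accept host:port syntax; the port goes in the -p option. Assuming the LocalForward from the first config is active, something like this should reach the second server through the tunnel (the user name is just a placeholder, use whatever account exists on the second server):

ssh -p 9906 helloWorld@localhost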
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375411", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239202/" ] }
375,433
I am using Yocto to build an image for a board. I have disabled the root user. I am adding a new user, let's say adminuser1 . As far as I can tell, there are two options to make adminuser1 an admin:

Add adminuser1 to sudoers in /etc/sudoers
Create a new file /etc/sudoers.d/0001_admin1 and add a line adminuser1 ALL=(ALL) ALL

The default /etc/sudoers has the sudo group commented: # %sudo ALL=(ALL) ALL

I am trying to understand which one is the better approach in terms of security:

Shall I add adminuser1 to the sudo group and uncomment the # %sudo ALL=(ALL) ALL line in /etc/sudoers ?
Or shall I add /etc/sudoers.d/0001_admin1 containing only adminuser1 ALL=(ALL) ALL ?
The choice between sudoers and sudoers.d has nothing to do with security, but everything to do with maintainability. By uncommenting the sudo group line in /etc/sudoers , you can add all users that need sudo access to the sudo group. This may or may not be easier than adding a new file in sudoers.d , depending on your setup. However, changing the shipped configuration file may make things harder (e.g., if an update of your distribution would overwrite the sudoers file, you have to ensure that your change is retained). By adding a file to /etc/sudoers.d , you don't have the update issue hinted at above; but if you explicitly add configuration for adminuser1 there rather than for a group, adding more users with sudo rights will require more files to be added to /etc/sudoers.d , which may or may not be more involved than just adding them to the right group. There is no one way which is "best", and there certainly isn't a security issue based on "which file do I configure sudo rights in". Just consider the upsides and/or downsides of both methods, and use whichever works best for your use case.
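To make the two options concrete, a sketch of each (the file names and user come from the question; validating a drop-in with visudo before relying on it is a good habit):

# option 1: group-based
usermod -aG sudo adminuser1      # then uncomment the %sudo line in /etc/sudoers via visudo

# option 2: drop-in file
echo 'adminuser1 ALL=(ALL) ALL' > /etc/sudoers.d/0001_admin1
visudo -cf /etc/sudoers.d/0001_admin1
chmod 0440 /etc/sudoers.d/0001_admin1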
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375433", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234128/" ] }
375,434
I'm having a bit of trouble. I'm trying to write a script that will create a list of files and dirs older than 7 days and delete the ones that are not in the list tokeep.list. My first problem is that if I put a dir on the no-delete list, but this dir has old files inside, those files don't get deleted; so far the only way I've managed to handle it is to run one command for directories and another for files, which is pretty ugly. I'm no developer.

LogFile=/users/nordic/housekeep.log
source config.cfg
exec &> >(tee $LogFile)
echo "Starting Housekeep files at $host $timestamp"
echo "Creating list of directories to delete"
cd $dir
find $dir/* -maxdepth 1 -type d -mtime +$days > /users/nordic/todelete
if [ $? -eq 0 ]; then
    echo OK
else
    echo FAIL
fi
echo "deleting directories to listed on todelete file"
dels=`cat /users/nordic/todelete`
readarray -t keeps < /users/nordic/tokeep
for keep in "${keeps[@]}"; do
    dels=`echo "$dels" | grep -v "$keep"`
done
echo "$dels" > /users/nordic/todelete
readarray -t dels < /users/nordic/todelete; for del in "${dels[@]}"; do rm -rv "$del"; done
result=$?
if [ $result -eq 0 ]; then
    echo SUCCESS |tee /users/nordic/res
else
    echo FAILED |tee /users/nordic/res
fi
find $dir/* -maxdepth 1 -type f -mtime +$days -print -delete
#SUBJECT="Automated Housekeep $host $resu"
#TO="[email protected]"
#MESSAGE="$LogFile"
#mailx -s "$SUBJECT" -r "info<[email protected]>" $TO < $MESSAGE

I have a config.cfg file with the variables. What I'm trying to do is create a list of files and names of dirs to keep, and delete the rest. Any suggestion is welcome.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375434", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239214/" ] }
375,520
I have a string like "www.mysite.com" in the $site variable. In MySQL the permitted characters in unquoted identifiers are (more info: https://dev.mysql.com/doc/refman/5.7/en/identifiers.html ):

ASCII: [0-9,a-z,A-Z$_] (basic Latin letters, digits 0-9, dollar, underscore)
Extended: U+0080 .. U+FFFF

However, for me it would now be enough to do this regular expression: 's/[^0-9a-zA-Z\$ ]/ /g'

I would like to replace the invalid characters of $site with underscores to make a valid schema object name (like a database name). The replacement should be done with a Perl regex. In this example the . should be replaced with _ . In Bash:

site="www.mysite.com"
mysql_db_name= ???

My problem is that I don't know how to feed $site to the Perl regexp to do the replacements, and then assign the result to the $mysql_db_name variable. Thanks!
mysql_db_name=$(printf %s\\n "$site" | perl -lpe 'y/0-9a-zA-Z$_/_/c')

Now since you know Perl well, no need for any explanations.

mysql_db_name=${site//[!a-zA-Z_$0-9]/_}
mysql_db_name=$(perl -se 'print y/0-9a-zA-Z$/_/cr' -- -_="$site")
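For instance, with the example string from the question, the variants above should all yield the same result (shown here with the pure-Bash form):

$ site="www.mysite.com"
$ mysql_db_name=${site//[!a-zA-Z_$0-9]/_}
$ echo "$mysql_db_name"
www_mysite_com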
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375520", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208148/" ] }
375,525
I would like to scan a whole file tree and do two replacements on any line with two matches, i.e.:

printf("Hello WORLD! %s, %d\n",bcm_errstr(rv),var);
dprintf("kjhgjkhfkhgfjgd %s\n",bcm_errstr(rv));

should become

printf("Hello WORLD! %d, %d\n",rv,var);
dprintf("kjhgjkhfkhgfjgd %d\n"rv);

I tried the following without success ( sed.c being my test file containing two lines that will match the query):

$ grep printf | grep "%s" | grep -rl bcm_errmsg\(rv\) sed.c | xargs sed -i -e 's/%s/%d/' -e 's/bcm_errstr\(rv\)/rv/'

I use grep instead of find because the file names are unknown but I'm looking at the file contents instead. Contents of sed.c :

$ cat sed.c
printf("kjhlkjhlkjh%dkjhgljhglj\n",bcm_errmsg(rv));
dprintf("HELLO WORLD %d %d\n",test,bcm_errmsg(rv));

i.e. I want to apply the two sed replacements only to lines with printf , %s and bcm_errstr(rv) in them.
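One hedged approach (a sketch, not a tested drop-in): use a sed address so the two substitutions only run on lines that contain all three tokens, and let grep -rl pick the files. Note that the sample file above contains bcm_errmsg while the replacements target bcm_errstr; adjust to whichever name is really in the sources. With GNU grep and sed:

grep -rlZ 'bcm_errstr(rv)' . | xargs -0 sed -i '/printf.*%s.*bcm_errstr(rv)/{ s/%s/%d/; s/bcm_errstr(rv)/rv/; }'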
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46433/" ] }
375,530
I'm trying to pass a list of files with a known set of characters to sed for a find and replace. For a directory containing multiple .xml files:

ls -la
file1.xml
file2.xml
file3.xml

Each containing a matching string:

grep -i foo *
file1.xml <foo/>
file2.xml <foo/>
file3.xml <foo/>

Replace foo with bar using a for loop:

for f in *.xml; do ls | sed -i "s|foo|bar|g" ; done

Returns:

sed: no input files
sed: no input files
sed: no input files

I already figured out an alternative that works, so this is mostly for my own edification at this point.

find /dir/ -name '*.xml' -exec sed -i "s|foo|bar|g" {} \;
You have a flaw in your for loop. Remove the ls command, and add the $f variable as the argument to sed -i , which will edit each filename.xml in place: for f in *.xml; do sed -i "s|foo|bar|g" "$f"; done
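Incidentally, for this simple case the loop isn't strictly needed; GNU sed's -i happily takes all the files at once, so the following one-liner sketch is equivalent:

sed -i "s|foo|bar|g" *.xml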
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375530", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161646/" ] }
375,542
Currently I have ! in the right prompt of zsh as follows: export RPS1="%B%F{red}!%f%b" As I use tmux , and command history is not being synced throughout its panes (unfortunately), the numbers are almost useless for me. I tried to set RPS1 to ? and $? to display return code / error level of the command but with no success. I remember I had to set: setopt promptbang for ! to be interpolated (interpreted, expanded). How to achieve such a prompt on the right side of the command-line indicating the previous command's result in error number. An example screenshot of my current prompt having > , >> , and ! as $PS1 , $PS2 , and $RPS1 .
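A hedged sketch that may be what you're after: zsh prompts have a %? escape that expands to the exit status of the last command, and the %(?.true.false) ternary lets you show it only when it is non-zero. These are ordinary percent escapes, so no extra setopt is needed:

# always show the last exit status on the right
RPS1="%B%F{red}%?%f%b"

# or only show it when the last command failed
RPS1='%(?..%B%F{red}%?%f%b)'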
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375542", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
375,600
I just installed Linux kernel version 4.12 on Ubuntu 17.04 using ukuu (Ubuntu Kernel Update Utility https://doc.ubuntu-fr.org/ubuntu_kernel_upgrade_utility ). The thing is, when I check the available I/O schedulers, I can't seem to find the BFQ nor the Kyber I/O scheduler:

cat /sys/class/block/sda/queue/scheduler
noop deadline [cfq]

So how to use one of the new schedulers in this Linux version ?
I'm not in Ubuntu, but what I did in Fedora may help you.

BFQ is a blk-mq (Multi-Queue Block IO Queueing Mechanism) scheduler, so you need to enable blk-mq at boot time. Edit your /etc/default/grub file and add scsi_mod.use_blk_mq=1 to your GRUB_CMDLINE_LINUX ; this is my grub file, as an example:

GRUB_TIMEOUT=3
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=false
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="quiet vt.global_cursor_default=0 scsi_mod.use_blk_mq=1"
GRUB_DISABLE_RECOVERY="true"

After that, you must update your grub. On Fedora we have to use sudo grub2-mkconfig -o /path/to/grub.cfg , which varies depending on the boot method . On Ubuntu, you can simply run:

sudo update-grub

Reboot, and if you get this:

cat /sys/block/sda/queue/scheduler
[mq-deadline] none

probably your kernel was compiled with BFQ as a module , and this can be the case also for Kyber.

sudo modprobe bfq
sudo cat /sys/block/sda/queue/scheduler
[mq-deadline] bfq none

You can add it at boot time by adding a /etc/modules-load.d/bfq.conf file containing bfq .

It is important to note that enabling blk_mq makes it impossible to use the non-blk_mq schedulers, so you will lose noop, cfq and the non-mq deadline.

Apparently the blk_mq scheduling system does not support elevator flags in grub; udev rules can be used instead, with the bonus of offering more fine-grained control. Create /etc/udev/rules.d/60-scheduler.rules if it does not exist and add:

ACTION=="add|change", KERNEL=="sd*[!0-9]|sr*", ATTR{queue/scheduler}="bfq"

As pointed out here, if needed you can distinguish between rotational (HDDs) and non-rotational (SSDs) devices in udev rules using the attribute ATTR{queue/rotational} . Be aware that Paolo Valente, BFQ developer, pointed out in LinuxCon Europe that BFQ can be a better choice than the noop or deadline schedulers in terms of low-latency guarantees, which makes it good advice to use it for SSDs too. Paolo's comparison: https://www.youtube.com/watch?v=1cjZeaCXIyM&feature=youtu.be

Save it, and reload and trigger the udev rules:

sudo udevadm control --reload
sudo udevadm trigger
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/375600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/207157/" ] }
375,608
I'm setting up a crontab server to run several jobs to copy files from prod servers to lower environment servers. I need the cron server job to copy files from one server to another. Here is what I have; the IPs have been modified.

ssh -v -R localhost:50000:1.0.0.2:22 -i host1key.pem [email protected] 'rsync -e "ssh -i /home/ec2-user/host2key.pem -p 50000" -vuar /home/ec2-user/test.txt ec2-user@localhost:/home/ec2-user/test.txt'

I'm using two different pem keys and users. I would think this command would work but I get this error in the debug log. Here is more of it, showing only the portion that is erroring. It connects to [email protected] successfully, but errors on the 1.0.0.2 :

debug1: connect_next: host 1.0.0.2 ([1.0.0.2]:22) in progress, fd=7
debug1: channel 1: new [127.0.0.1]
debug1: confirm forwarded-tcpip
debug1: channel 1: connected to 1.0.0.2 port 22
Host key verification failed.
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]
debug1: channel 0: free: client-session, nchannels 2
debug1: channel 1: free: 127.0.0.1, nchannels 1
Transferred: sent 5296, received 4736 bytes, in 0.9 seconds
Bytes per second: sent 5901.2, received 5277.2
debug1: Exit status 12
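A hedged observation on the error itself: "Host key verification failed" comes from the inner ssh that rsync runs on the remote host; being non-interactive, it cannot prompt to accept the host key presented on localhost:50000, so it aborts. One workaround sketch (it weakens host-key checking for that hop, so treat it as illustrative only):

ssh -R localhost:50000:1.0.0.2:22 -i host1key.pem [email protected] 'rsync -e "ssh -i /home/ec2-user/host2key.pem -p 50000 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" -vuar /home/ec2-user/test.txt ec2-user@localhost:/home/ec2-user/test.txt'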
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/375608", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239368/" ] }
375,645
How can I tell if a btrfs subvolume is read-only or read-write?
btrfs property will show the read-only / read-write status of a subvolume: btrfs property get -ts /path/to/subvolume This will give either: ro=true or ro=false .
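If you also need to change the flag, the same property interface can set it (a sketch; it needs appropriate privileges, and making a subvolume writable that was created read-only by btrfs send/receive has implications for incremental backups):

btrfs property set -ts /path/to/subvolume ro false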
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
375,696
OS: Oracle Solaris 11.3.1.5.2, CPU Arch: X86

I recently installed Squid by doing

$ pkg install squid

This went fine:

root@darwin1:~# pkg info squid
          Name: web/proxy/squid
       Summary: Squid Web Proxy Cache
   Description: Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more.
      Category: Web Services/Application and Web Servers
         State: Installed
     Publisher: solaris
       Version: 3.5.5
 Build Release: 5.11
        Branch: 0.175.3.0.0.30.0
Packaging Date: Fri Aug 21 17:30:06 2015
          Size: 51.84 MB
          FMRI: pkg://solaris/web/proxy/[email protected],5.11-0.175.3.0.0.30.0:20150821T173006Z

but I cannot run Squid:

root@darwin1:~# /usr/squid/sbin/squid -h
Illegal Instruction (core dumped)

The file command gives me this:

root@darwin1:~# file /usr/squid/sbin/squid
/usr/squid/sbin/squid: ELF 32-bit LSB executable 80386 Version 1, dynamically linked, not stripped

I'm inside a local (non-kernel) zone. It shouldn't matter, should it? Why the core dump?
Sorry, I think I've found the answer myself: http://wiki.squid-cache.org/KnowledgeBase/IllegalInstructionError . (quote begin) Illegal Instruction errors on Squid 3.4 Synopsis Squid 3.4 and later, running on certain paravirtualized systems and even some claiming full virtualization (at least KVM, Xen, and Xen derivatives are confirmed so far) crashes with an illegal instruction error soon after startup. Symptoms Squid crashes with Illegal Instruction error immediately after startup on a virtual machine on Intel-compatible processors Explanation The Squid build system uses by default the -march=native gcc option to optimize the resulting binary. Unfortunately certain (para-)virtualization systems don't support the whole instruction set they advertise. The compiler doesn't know, and generates instructions which trigger this error. Workaround These optimizations are helpful but not necessary to have a fully functional squid, especially on ia64/amd64 platforms. The detected defaults can be overridden by supplying the --disable-arch-native option to the configure script. (quote end) We're running Solaris inside VMware ESXi 6.0 . So I guess that's the reason. I won't delete my own question on the odd chance that someone else will run into this too.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375696", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22123/" ] }
375,710
I need to move files from the source path "u01/app/java/gids4/textfiles/output/foursight/request" to the destination path "/u01/app/java/gids4/textfiles/output/archive/foursight/request".
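Assuming both directories already exist and the source path is really under /u01 (the question omits the leading slash), a minimal sketch would be:

mv /u01/app/java/gids4/textfiles/output/foursight/request/* /u01/app/java/gids4/textfiles/output/archive/foursight/request/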
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239425/" ] }
375,743
When I use the clear command to clear the screen, it is not really cleared (see the screenshot, taken after scrolling up a bit once the command is done). So I double the command to get the right behavior: $ clear && clear && DD=0 ... Why do I need to double the command to get a cleared screen? UPD: Actually, if I just run clear I get a cleared screen. But I can scroll up and see the last 25 lines (if the screen is 80x25). When I run clear;clear those lines are cleared too.
The important thing to note here is the tag on the question. This behaviour is specific to GNOME Terminal and any other terminal emulators that are built upon libvte. You won't see this in Xterm, or in Unicode RXVT, or in the terminal emulator built into the Linux kernel, or on the FreeBSD console. What happens in general is this. The clear command looks at terminfo/termcap and issues appropriate control sequences. If the terminfo/termcap entry has an E3 capability it, it first writes out that. This issues control sequences to clear the scrollback buffer. This and the history behind it are documented in detail in the Dickey ncurses manual page for the clear command . It then uses the clear capability to clear the visible screen. The control sequences in the terminfo/termcap entry are determined by the terminal type; but, with the exceptions of the (nowadays rare) terminals that use FormFeed to clear the screen (which DEC VTs and their imitators do not), they are either just plain old ECMA-48 control sequences or extensions thereto. For examples: The putty entry defines E3=\E[3J which is the Xterm extension control sequence. The NetBSD console's pcvtxx entry is one of many that define clear=\E[H\E[J or something similar. This is two ordinary ECMA-48 control sequences. The terminal emulator acts upon the control sequences. As defined by ECMA-48 and the Xterm extension to it: CSI H (CUP) homes the cursor. CSI 0 J (ED 0) or just CSI J erases from the current cursor position to the end of the screen. CSI 2 J (ED 2) erases the whole screen. CSI 3 J (ED 3) erases the scrollback buffer. When it comes to GNOME Terminal in particular: The terminal type is properly gnome , but some people leave it erroneously set to xterm . The gnome terminfo entry does not define an E3 capability, and on many systems — still! — neither does the xterm entry as this has not percolated down from Dickey terminfo . So clear just writes out the contents of the clear capability. The contents of the clear capability for those terminfo entries are the control sequences to home the cursor followed by erase the whole screen. But GNOME Terminal does not implement erase the whole screen correctly. More specifically, the library that it is based upon, libvte, does not do that in the code of its VteTerminalPrivate::seq_clear_screen() function . Rather, libvte scrolls the screen down an entire screen's worth of blank lines, and moves the cursor position to the first of those blank lines. This is why you see what you see. libvte is not erasing the whole screen when told to. Rather it is doing something that has a superficial resemblance to that, until one does exactly what the questioner has done here: scroll the terminal window back to look at the scroll back buffer. Then the difference is blatant. On other terminal emulators such as Xterm and Unicode RXVT, the ED 2 control sequence really does erase the screen, erasing every position on the screen in place, from the top down, and not altering the scrollback buffer. But on libvte terminal emulators, it just pushes the current screen up into the scrollback buffer and adds a screen's worth of blank lines. The prior screen contents are not erased but shifted into the scrollback buffer. And if you run the clear command twice, it adds two screen's worth of blank lines. If your scroll back buffer is large enough, you can still find the original screen contents, simply further up in the scrollback buffer. Further reading Control Functions for Coded Character Sets . ECMA-48. 1976. 
Georgi Kirilov (2007-12-30). Ctrl-L adds blank space to the scrollback buffer. GNOME bug #506438.
To what extent are the xterm, xterm-color, and linux terminal emulators based on VT100?
Clearing the "old" scrollback buffer
Bash clear command weird behavior deletes scrollback buffer. https://superuser.com/questions/1094599/
Thomas Dickey (2018). "Known Bugs in XTerm and Look–alikes: GNOME Terminal". XTerm Frequently Asked Questions. invisible-island.net.
Thomas Dickey (2018). "Known Bugs in XTerm and Look–alikes: Notes on VTE". XTerm Frequently Asked Questions. invisible-island.net.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375743", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/129967/" ] }
375,781
In a RHEL 7.3 server, I was trying to find logged-in users. I ran w and it told me there were two users, but it only showed me the info of one (myself); then I ran who , which displayed the other user as (unknown). Finally, I ran lastlog , with whose output I could match the login date and port from who 's output and find that the unknown user actually is gdm .

$ w
 09:33:36 up 4 days, 15:22,  2 users,  load average: 0.00, 0.01, 0.05
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
myusr    pts/0    172.16.23.113    09:32    0.00s  0.06s  0.03s w

$ who
(unknown) :0           2017-07-01 18:13 (:0)
myusr     pts/0        2017-07-06 09:32 (172.16.23.113)

$ lastlog
Username         Port     From             Latest
...
gdm              :0                        Sat Jul  1 18:13:23 -0500 2017
...

The server is a Supermicro machine and from time to time I connect to it using IPMI2's KVM-over-LAN feature. But I don't remember anything weird happening when connecting like that. This doesn't seem normal. What could have happened?
After reading Centimane's comment on /var/run/utmp and searching differently, I found this fedora forum thread , which mentioned the issue is provoked by a bug in GDM, which creates a bad entry in /var/run/utmp . Eventually I even found a bug report for it and another here .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375781", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234165/" ] }
375,814
in zsh , I can autocomplete hostnames from /etc/hosts , i.e.: ssh f<TAB> will offer completions for hosts starting with f . This is configured in /usr/share/zsh/functions/Completion/Unix/_hosts :

local ipstrip='[:blank:]#[^[:blank:]]#'
zstyle -t ":completion:${curcontext}:hosts" use-ip && useip=yes
[[ -n $useip ]] && ipstrip=
if (( ${+commands[getent]} )); then
  _cache_hosts=(${(s: :)${(ps:\t:)${(f)~~"$(_call_program hosts getent hosts 2>/dev/null)"}##${~ipstrip}}})
else
  _cache_hosts=(${(s: :)${(ps:\t:)${${(f)~~"$(</etc/hosts)"}%%\#*}##${~ipstrip}}})
fi
....
_hosts=( "$_cache_hosts[@]" )

However, it only works if the /etc/hosts file has the format 'IP' 'hostname', i.e.:

192.168.1.4 foo.mydomain.com

It will not work if the IP is missing:

foo.mydomain.com

How can I modify the completion script, so that hostnames without an IP are also completed? Completion of hostnames without an IP from /etc/hosts works fine in bash_completion , so I am just trying to get the same behavior on zsh .
I'd recommend doing this, which would use your (and the system's) ssh known hosts file instead:

zstyle -e ':completion:*:(ssh|scp|sftp|rsh|rsync):hosts' hosts 'reply=(${=${${(f)"$(cat {/etc/ssh_,~/.ssh/known_}hosts(|2)(N) /dev/null)"}%%[# ]*}//,/ })'

If you're still wanting to use /etc/hosts instead:

strip='[:blank:]#[^[:blank:]]#'
zstyle -e ':completion:*:(ssh|scp|sftp|rsh|rsync):hosts' hosts 'reply=(${(s: :)${(ps:\t:)${${(f)~~"$(</etc/hosts)"}%%\#*}##${~strip}}})'

Best of luck!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155832/" ] }
375,857
We can open a website with GET parameters in the browser as follows:

#!/bin/bash
echo 'enter username'
read username
firefox "https://github.com/${username}"

This comes in handy because I can now visit any user's github page with just a command and then entering their username. Similarly we can make a shell script to search Google with our passed query in the parameters. How do I open a website which requires POST parameters to be passed, so that I can directly visit the website from the terminal? Take for example https://www.startpage.com . If POST request passing is possible then we can directly search our query from the terminal.

Note: Not looking for answers based on curl to retrieve data, but answers based on firefox or any other browser to visit the website.

Any way other than Selenium is preferred, because with Selenium the user would not have control over low-level data being passed in the POST request, like User-Agent , lang , and some other header parameters. The user would be bound to only UI options if using Selenium, and these low-level headers can't be modified according to need.

xdotool would be costly because the user would have to count how many times to press Tab in order to reach the particular form field, and then loop Tab that many times before typing something in there. It also doesn't give me the ability to change low-level POST parameters like User-Agent , lang , etc.
You create a temporary auto-submitting HTML page, point the browser to that page, and after a couple of seconds, you remove the temporary HTML file as it is no longer needed. In script form:

#!/bin/bash
# Create an autodeleted temporary directory.
Work="$(mktemp -d)" || exit 1
trap "cd / ; rm -rf '$Work'" EXIT

# Create a HTML page with the POST data fields,
# and have it auto-submit when the page loads.
cat > "$Work/load.html" <<END
<!DOCTYPE html>
<html>
 <head>
  <title>&hellip;</title>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  <script type="text/javascript">
   function dosubmit() {
    document.forms[0].submit();
   }
  </script>
 </head>
 <body onload="dosubmit();">
  <form action="https://www.startpage.com/do/asearch" method="POST" accept-charset="utf-8">
   <input type="hidden" name="cat" value="web">
   <input type="hidden" name="cmd" value="process_search">
   <input type="hidden" name="language" value="english">
   <input type="hidden" name="engine0" value="v1all">
   <input type="hidden" name="query" value="&#34;Nominal Animal&#34;">
  </form>
 </body>
</html>
END

# Load the generated file in the browser.
firefox "file://$Work/load.html"

# Firefox returns immediately, so we want to give it a couple
# of seconds to actually read the page we generated,
# before we exit (and our page vanishes).
sleep 2

Let's change the above, so that we do a StartPage search on whatever string(s) are supplied on the command line:

#!/bin/bash
# Create an autodeleted temporary directory.
Work="$(mktemp -d)" || exit 1
trap "cd / ; rm -rf '$Work'" EXIT

# Convert all command-line attributes to a single query,
# escaping the important characters.
rawAmp='&' ; escAmp='&amp;'
rawLt='<' ; escLt='&lt;'
rawGt='>' ; escGt='&gt;'
rawQuote='"' ; escQuote='&#34;'
QUERY="$*"
QUERY="${QUERY//$rawAmp/$escAmp}"
QUERY="${QUERY//$rawQuote/$escQuote}"
QUERY="${QUERY//$rawLt/$escLt}"
QUERY="${QUERY//$rawGt/$escGt}"

# Create a HTML page with the POST data fields,
# and have it auto-submit when the page loads.
cat > "$Work/load.html" <<END
<!DOCTYPE html>
<html>
 <head>
  <title>&hellip;</title>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  <script type="text/javascript">
   function dosubmit() {
    document.forms[0].submit();
   }
  </script>
 </head>
 <body onload="dosubmit();">
  <form action="https://www.startpage.com/do/asearch" method="POST" accept-charset="utf-8">
   <input type="hidden" name="cat" value="web">
   <input type="hidden" name="cmd" value="process_search">
   <input type="hidden" name="language" value="english">
   <input type="hidden" name="engine0" value="v1all">
   <input type="hidden" name="query" value="$QUERY">
  </form>
 </body>
</html>
END

# Load the generated file in the browser.
firefox "file://$Work/load.html"

# Firefox returns immediately, so we want to give it a couple
# of seconds to actually read the page we generated,
# before we exit (and our page vanishes).
sleep 2

All that changes is the chunk where we use Bash string operations to replace each & with &amp; , each " with &#34; , each < with &lt; , and each > with &gt; , so that the query string can be safely written as the value attribute of the hidden input named query . (Those four suffice. It is also important to do the ampersand first, because the subsequent replacements contain ampersands. Since we emit this as the value of a hidden input, the query string is not url-encoded; it's just normal HTML content, but without doublequotes (because the value itself is in doublequotes).)
The downside of auto-submitting POST requests is that you may need to update your auto-submitting HTML page every now and then, simply because the site can change the POST variable naming and internal URLs whenever they want.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375857", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224025/" ] }
375,860
Is there something similar to Vim's "Command Line Window" for Bash where I can see/edit/execute items from the history? In Vim when I press : and then Ctrl-F it opens the window that shows the entire command history:

7. Command-line window          *cmdline-window* *cmdwin* *command-line-window*

In the command-line window the command line can be edited just like editing
text in any window.  It is a special kind of window, because you cannot leave
it in a normal way.

OPEN                            *c_CTRL-F* *q:* *q/* *q?*
[..]
When the window opens it is filled with the command-line history.  The last
line contains the command as typed so far.  The left column will show a
character that indicates the type of command-line being edited, see
|cmdwin-char|.  When you press Enter the current line is executed.

(I know that I can search the history with Ctrl-R , / (vi-mode), etc.)
You have two alternatives. Either you can install hstr ( https://github.com/dvorka/hstr ) which features a suggest box with advanced search options to easily view, navigate, search, and manage your command history: Otherwise, Bash features a vi-like command line history editor. Do a set -o vi , then you can search throughout history via these keystrokes: Esc enters command mode / begins a search; type search string, then Enter to perform a search. n goes to next match, while N goes to the previous match i goes back to insert mode
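One more built-in worth knowing in this context (a side note, not a replacement for the tools above): Bash's fc command lists recent history entries and can open one in $EDITOR before running it; be aware that whatever you save is executed immediately on exit. The entry number below is just a made-up example:

fc -l      # list the last few history entries with their numbers
fc 210     # edit history entry 210 in $EDITOR, then run the edited line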
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/375860", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46158/" ] }
375,889
Let's say I'm running a script (e.g. in Python). In order to find out how long the program took, one would run time python script1.py Is there a command which keeps track of how much RAM was used as the script was running? In order to find how much RAM is available, one could use free , but this command doesn't fit the task above.
The time(1) command (you may need to install it -perhaps as the time package-, it should be in /usr/bin/time ) accepts many arguments, including a format string (with -f or --format ) which understands (among others) %M Maximum resident set size of the process during its lifetime, in Kbytes. %K Average total (data+stack+text) memory use of the process, in Kbytes. Don't confuse the /usr/bin/time command with the time bash builtin . You may need to type the full file path /usr/bin/time (to ask your shell to run the command not the builtin) or type command time or \time (thanks to Toby Speight & to Arrow for their comments). So you might try (RSS being the resident set size ) /usr/bin/time -f "mem=%K RSS=%M elapsed=%E cpu.sys=%S user=%U" python script1.py You could also try /usr/bin/time --verbose python script1.py You are asking: how much RAM was used as the script was running? and this shows a misconception from your part. Application programs running on Linux (or any modern multi-process operating system) are using virtual memory , and each process (including the python process running your script) has its own virtual address space . A process don't run directly in physical RAM, but has its own virtual address space (and runs in it), and the kernel implements virtual memory by sophisticated demand-paging using lazy copy-on-write techniques and configures the MMU . The RAM is a physical device and resource used -and managed internally by the kernel- to implement virtual memory (read also about the page cache and about thrashing ). You may want to spend several days understanding more about operating systems . I recommend reading Operating Systems : Three Easy Pieces which is a freely downloadable book. The RAM is used by the entire operating system (not -directly- by individual processes) and the actual pages in RAM for a given process can vary during time (and could be somehow shared with other processes). Hence the RAM consumption of a given process is not well defined since it is constantly changing (you may want its average, or its peak value, etc...), and likewise for the size of its virtual address space. You could also use (especially if your script runs for several seconds) the top(1) utility (perhaps in some other terminal), or ps(1) or pmap(1) -maybe using watch(1) to repeat that ps or pmap command. You could even use directly /proc/ (see proc(5) ...)perhaps as watch cat /proc/$(pidof python)/status or /proc/$(pidof python)/stat or /proc/$(pidof python)/maps etc... But RAM usage (by the kernel for some process) is widely varying with time for a given process (and even its virtual address space is changing, e.g. by calls to mmap(2) and munmap used by ld-linux(8) , dlopen(3) , malloc(3) & free and many other functions needed to your Python interpreter...). You could also use strace(1) to understand the system calls done by Python for your script (so you would understand how it uses mmap & munmap and other syscalls(2) ). You might restrict strace with -e trace=%memory or -e trace=memory to get only memory (i.e. virtual address space) related system calls. BTW, the tracemalloc Python feature could be also useful. I guess that you only care about virtual memory , that is about virtual address space (but not about RAM), used by the Python interpreter to run your Python script. And that is changing during execution of the process. The RSS (or the maximal peak size of the virtual address space) could actually be more useful to know. See also LinuxAteMyRAM .
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/375889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115891/" ] }
375,915
Here is my ls command, which shows the list of files in my current directory: rper: ls -l$commandoutput[0] file.txt I tried to remove $commandoutput[0] line by using rm -rf $commandoutput[0] , but it shows the following error. How can I remove it? rper: rm -rd $commandoutput[0]commandoutput: Undefined variable.
In alternative to @RomanPerekhrest's answer, this will also work: rm '$commandoutput[0]' as the single quotes will avoid variable expansion. Another way is to start typing rm $ and then hitting Tab ; the shell will autocomplete the filename, escaping characters as needed.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/375915", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239007/" ] }
376,059
If I have some random processes reading from a named pipe:

tail -f MYNAMEDPIPED
cat MYNAMEDPIPE | someOtherProc

Elsewhere, I have a handle on MYNAMEDPIPED by name. Is there a safe and clean way to stop the tail process by deleting MYNAMEDPIPED or perhaps somehow "drying it up"? In other words MYNAMEDPIPED.noMoreDataIsComingThroughSoPleaseStopTailingThis() :) From one of the comments, it says to send EOF to MYNAMEDPIPE. But I cannot figure out how to do this. This shows the difficulty I am facing: http://comp.os.linux.questions.narkive.com/2AW9g5yn/sending-an-eof-to-a-named-pipe
EOF is not a character nor an "event" and cannot be sent through the pipe, or "fed" to its writing end, as some stubborn urban legend suggests. The only way to generate an EOF on the reading end of a pipe/fifo (i.e. cause a read(2) on it to return 0) is to close all open handles to its writing end. This will happen automatically if all the processes that had opened a named pipe in write mode and all the children that have inherited the file descriptors through fork() are terminated [1]. It's not possible for a read(2) on a named pipe to return 0 if that pipe was opened in read/write mode, e.g. with exec 7<>/path/to/fifo, because in that case there is a single file descriptor / handle for both ends of the pipe and closing the write end will also close the read end, making it impossible for a read(2) to return 0 (pipes do not support any kind of half-close as sockets do with shutdown(2) ).

[1] And all processes that have received the file descriptor via SCM_RIGHTS ancillary message on a unix socket.

Notice that tail -f by definition won't terminate upon EOF , whether the file it's reading from is regular or special. One way to kill all processes that are holding an open handle to a file descriptor is with fuser(1) :

tail -f /path/to/fifo
...
> /path/to/fifo    # let any blocking open(2) through
fuser -TERM -k /path/to/fifo

Beware that this will also kill processes that have (inadvertently) inherited an open handle to /path/to/fifo from their parents.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/376059", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
376,075
I have a simple script that I understand most of, it's the find command that's unclear. I've got a lot of documentation but it's not serving to make it much clearer. My thought is that it is working like a for-loop, the currently found file is swapped in for {} and copied to $HOME/$dir_name, but how does the search with -path and -prune -o work? It's annoying to have such specific and relevant documentation and still not know what's going on. #!/bin/bash# The files will be search on from the user's home# directory and can only be backed up to a directory# within $HOMEread -p "Which file types do you want to backup " file_suffixread -p "Which directory do you want to backup to " dir_name# The next lines creates the directory if it does not existtest -d $HOME/$dir_name || mkdir -m 700 $HOME/$dir_name# The find command will copy files that match the# search criteria ie .sh . The -path, -prune and -o# options are to exclude the backdirectory from the# backup.find $HOME -path $HOME/$dir_name -prune -o \-name "*$file_suffix" -exec cp {} $HOME/$dir_name/ \;exit 0 This is just the documentation that I know I should be able to figure this out from. -path pattern File name matches shell pattern pattern. The metacharacters do not treat / or . specially; so, for example, find . -path "./sr*sc" will print an entry for a directory called ./src/misc (if one exists). To ignore a whole directory tree, use -prune rather than checking every file in the tree. For example, to skip the directory src/emacs and all files and directories under it, and print the names of the other files found, do something like this: find . -path ./src/emacs -prune -o -print From Findutils manual -- Action: -exec command ; This insecure variant of the -execdir action is specified by POSIX. The main difference is that the command is executed in the directory from which find was invoked, meaning that {} is expanded to a relative path starting with the name of one of the starting directories, rather than just the basename of the matched file. While some implementations of find replace the {} only where it appears on its own in an argument, GNU find replaces {} wherever it appears. And For example, to compare each C header file in or below the current directory with the file /tmp/master: find . -name '*.h' -execdir diff -u '{}' /tmp/master ';'
-path works exactly like -name , but applies the pattern to the entire pathname of the file being examined, instead of to just the last component. -prune forbids descending below the found file, in case it was a directory. Putting it all together, the command

find $HOME -path $HOME/$dir_name -prune -o -name "*$file_suffix" -exec cp {} $HOME/$dir_name/ \;

starts looking for files in $HOME . If it finds a file matching $HOME/$dir_name it won't go below it ("prunes" the subdirectory). Otherwise ( -o ), if it finds a file matching *$file_suffix it copies it into $HOME/$dir_name/ . The idea seems to be to make a backup of some of the contents of $HOME in a subdirectory of $HOME . The part with -prune is obviously necessary in order to avoid making backups of backups...
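A roughly equivalent formulation with GNU find, as a sketch (less efficient, because find still descends into the backup directory and merely filters its contents out, whereas -prune skips it entirely):

find $HOME -name "*$file_suffix" -not -path "$HOME/$dir_name/*" -exec cp {} $HOME/$dir_name/ \;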
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/376075", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235424/" ] }
376,091
I alt-clicked on a window's title bar and removed the bar by clicking "No border" but now I have no accessible means to restore that window's title bar. Is there a way to open KWin for that specific window? What are my options for restoring the window's border? (Plasma 5, if it's version-specific.)
Focus window: Alt + F3 ⇒ "more actions" ⇒ uncheck "no border"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/376091", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42894/" ] }
376,103
Is there any way to cat a file without interpreting the double backslash as an escape sequence? In this example a tex file is created:

cat <<EOF > file.tex
\\documentclass[varwidth=true,border=5pt]{standalone}
\\usepackage[utf8]{inputenc}
\\usepackage{amsmath}
\\begin{document}
$1
\\end{document}
EOF

How can I write this so that the backslash doesn't have to be written twice each time, but $1 is still expanded with its normal value (which might contain backslashes too)?
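One hedged way to get both behaviours at once (literal backslashes and an expanded $1) is to quote the here-document delimiter for the static LaTeX, which disables all expansion inside it, and append the variable part separately:

cat <<'EOF' > file.tex
\documentclass[varwidth=true,border=5pt]{standalone}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\begin{document}
EOF
printf '%s\n' "$1" >> file.tex
cat <<'EOF' >> file.tex
\end{document}
EOF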
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/376103", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91304/" ] }
376,140
Consider the following shell script:

val=($ls)

The ls does not give any shell text output. Now, how do we get output text on screen while the command is being executed? I can print the value of val to get the output, but using echo is not the point. So, using the following line is not the answer:

echo $val

So, in a nutshell, how do I get the output of the current command being executed in the shell, just as if you had executed the command by itself?
You can get the shell to echo everything it is doing, by running the following command: sh -x yourscript Or you can add this as the first command in the script: set -x It can get a bit too verbose, though. It's OK for debugging, but if you want selective output it would be best to do it yourself with carefully places echo commands.
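For selective output, rather than tracing the whole script, you can toggle it around just the part you care about (a small sketch):

set -x          # start echoing commands as they run
val=$(ls)
set +x          # stop echoing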
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/376140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239753/" ] }
376,149
I have a question when working with autotools, specifically when generating configure scripts by running autoreconf -fi . I'll get these warnings:

libtoolize: putting auxiliary files in '.'.
libtoolize: copying file './ltmain.sh'
libtoolize: Consider adding 'AC_CONFIG_MACRO_DIRS([m4])' to configure.ac,
libtoolize: and rerunning libtoolize and aclocal.
libtoolize: Consider adding '-I m4' to ACLOCAL_AMFLAGS in Makefile.am.
configure.ac:12: installing './compile'
configure.ac:15: installing './config.guess'
configure.ac:15: installing './config.sub'
configure.ac:6: installing './install-sh'
configure.ac:6: installing './missing'
Makefile.am: installing './INSTALL'
src/Makefile.am:5: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')
src/Makefile.am: installing './depcomp'
src/filteropt/Makefile.am:3: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')
src/memory/Makefile.am:3: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')
src/pagemanager/Makefile.am:3: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')
src/raster/Makefile.am:5: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')
src/raster/blendSource/Makefile.am:3: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')

After this I can manually go through and change INCLUDES to AM_CPPFLAGS as well as adding -I m4 but shouldn't I be able to update the configure files so that I do not get these warnings? Where would I make those edits so that I can avoid these warnings?
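A hedged note: configure.ac and the Makefile.am files are the hand-maintained inputs (autoreconf regenerates configure and Makefile.in from them), so editing them as the warnings suggest is the intended fix rather than a workaround. A sketch of the edits:

# configure.ac
AC_CONFIG_MACRO_DIRS([m4])      # and create the m4/ directory if it doesn't exist

# top-level Makefile.am
ACLOCAL_AMFLAGS = -I m4

# in each Makefile.am listed in the warnings, rename the variable
AM_CPPFLAGS = ...               # instead of INCLUDES = ...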
Those messages point at the maintainer sources, not at the generated configure script, so that is where the edits go — autoreconf -fi always regenerates the outputs from configure.ac and the Makefile.am files. Add AC_CONFIG_MACRO_DIRS([m4]) to configure.ac (and create the m4 directory), add ACLOCAL_AMFLAGS = -I m4 to the top-level Makefile.am, and rename INCLUDES to AM_CPPFLAGS in each Makefile.am that still uses the old name. Once the sources are fixed, rerunning autoreconf -fi should come back clean; editing the generated files directly is pointless because they are overwritten on every run.
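A rough sketch of the mechanical part (run from the project's top directory; it assumes INCLUDES appears at the start of the offending lines, as the warnings suggest):

mkdir -p m4                                   # directory for AC_CONFIG_MACRO_DIRS([m4]) to point at
# in configure.ac add:               AC_CONFIG_MACRO_DIRS([m4])
# in the top-level Makefile.am add:  ACLOCAL_AMFLAGS = -I m4
grep -rl --include=Makefile.am '^INCLUDES' . | xargs sed -i 's/^INCLUDES/AM_CPPFLAGS/'
autoreconf -fi                                # regenerate everything; the warnings should be gone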
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/376149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140921/" ] }
376,158
For the consistency check of a backup program, I want to define a function which counts all files in a directory including all files in subdirs, subsubdirs and so on. The solution I am trying so far is as follows: countfiles() { local cdir=$1 local files=$(ls -la $cdir | grep -cv '^[dl]') local dirstring=$(ls -la $cdir | grep '^d' | egrep -o ' \.?[^[:space:].][^[:space:]]+$') local directories=(${dirstring//"\n"/}) echo ${directories[@]} for dir in ${directories[@]}; do echo -n "$dir " echo -n 'filecount >> ' local dirfiles=$(countfiles "$cdir/$dir") echo -n $dirfiles echo ' <<' #files=$(($files+$dirfiles)) done echo $files} Which gives me the following output: .config .i3 .scripts.config filecount >> gtk-3.0 termite gtk-3.0 filecount >> 2 << termite filecount >> 2 << 1 <<.i3 filecount >> 5 <<.scripts filecount >> 2 <<5 While the actualization of my $files counter is commented atm and I may need to unlocalize it, right now I set all variables as local to avoid any interference. The directory tree is as follows: /.scripts/backup_dotfiles.sh/.config/termite/config/.config/gtk-3.0/settings.ini/.i3/config/.i3/i3blocks.conf/.i3/lockicon.png/.i3/lockscreen.sh/.gtkrc-2.0/.bashrc/.zshrc/.i3/.Xresources My questions: Why does it always count the files +1 except for the master directory? Why does it count anything in the '.config' directory, as there are no files in there? How can I fix this?
You probably want to just use find . Assuming you don't have files with newlines in their names, just something like this would do: find "$dir" -type f | wc -l -type f matches regular files, but not directories, pipes, sockets or whatever. The usual output of find separates the filenames with newlines, so if any of the names contain newlines, the output will be ambiguous. With GNU find, something like this would work: find "$dir" -type f -printf . | wc -c That has find print only a dot for each file, and counts the dots. Other versions of find don't have -printf but we can use the trick of passing the input path with doubled slashes. They are treated like a single slash, but will not naturally appear otherwise in the output since file names cannot contain slashes. Then count the double-slashes in the output: find "$dir//" -type f | grep -c // If we want to do that with purely a shell script, we can have the shell list the filenames, no need to use ls, e.g. in Bash: #!/bin/bashfiles=0shopt -s dotglobcountfiles() { local f; for f in * ; do if [ -f "$f" ] ; then # count regular files files=$((files + 1)) elif [ -d "$f" ] ; then # recurse into directories cd "$f" countfiles cd .. fi done}cd "$1"countfilesecho $files
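For instance, saving the pure-shell version as countfiles.sh (the script name and the path are made up for this example), the two approaches can be cross-checked like this:

# both commands should report the same number of regular files
./countfiles.sh "$HOME/dotfiles"
find "$HOME/dotfiles" -type f | wc -l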
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/376158", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/229880/" ] }
377,191
I would like to know about the different kill stages. We always use kill -9 and not any other number. Can anyone explain the reason?
Originally, the kill command/system call just killed a process. This was done by the kernel and the process just disappeared, never being notified about it. That stopped around the Third Edition, I think. kill -9 says to send signal number 9 to a process. Unlike most (all? it depends) other signals, it can't be 'caught' by a process and handled in any way. A kinder way to stop a process is kill -15 (or kill -TERM) which tells the process it is being terminated, but gives it a chance to perform cleanup. Use of kill -9 is a 'guaranteed' way to kill a process; if it's stuck, kill -15 might not always work. Hence, many people still use kill -9 as a 'first resort'. The reason the 'ultimate' kill signal is number 9 is just the way they did it. There were at least another eight different signals at that time, and I guess the numbers were assigned by the person who programmed that part of the kernel (probably Ken Thompson). Some of the lower numbers are now largely historical, as they map onto hardware instructions and/or events in the PDP-11 hardware. And there are also many others above 9. Note that the actual numbers have no levels or hierarchy in them; in no sense is signal 8 'less' than signal 9 or 'greater' than signal 7.
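The usual polite-then-forceful pattern looks like this (PID 1234 is made up):

kill -15 1234                               # SIGTERM: ask the process to clean up and exit
sleep 5                                     # give it a moment to shut down
kill -0 1234 2>/dev/null && kill -9 1234    # still there? SIGKILL cannot be caught or ignored
kill -l                                     # list all signal names and numbers on this system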
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/377191", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/228031/" ] }
377,303
I want to name a file according to the parity of the day of the week. In the terminal the following works: $(($(date +\%u)%2)) But this doesn't work in cron (I suspect evaluation of mathematical expressions doesn't work there). How can I make this work in cron?
You escaped one percent sign and not the other: $(($(date +\%u)%2)) ^ HERE All percent signs in a crontab entry need to be escaped, because % has special meaning there. To quote from the crontab(5) manpage: The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the crontab file. Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input. Admittedly, that paragraph could be worded better. So that needs to be: $(($(date +\%u)\%2))
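For example, a complete crontab entry with both percent signs escaped (the schedule and file path are illustrative only):

# runs daily at 02:30; the file name ends in 0 or 1 depending on the day-of-week parity
30 2 * * * touch "/tmp/backup-$(($(date +\%u)\%2)).log"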
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/377303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117315/" ] }
377,359
I have several thousand pages of scanned book pages. Each page is saved individually as a JPG. The writing is clear, but fonts vary, and the pages do include pictures and illustrations. I need to create a list of all of the words appearing in each JPG file. Is there a command line tool for scanning an image listing the words that appear? It does not need to have perfect scanning, just an estimate.
Install imagemagick, pdftotext (found in a package named poppler-utils in some package managers) and ocrmypdf. The latter is a fast (OCR takes a lot of CPU, and it is configured to use all your cores), open-source and frequently updated piece of OCR software. This approach is possibly overkill, as it actually tries to assign a string to each word instead of just labelling it, but I've had a lot of trouble finding good and easy-to-use open-source OCR software in general. Then, in the directory where you have saved all your JPGs: $ convert *.jpg pictures.pdf$ ocrmypdf pictures.pdf scanned.pdf$ pdftotext scanned.pdf scanned.txt$ wc -w scanned.txt
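Since the question asks for a list of words rather than a count, one more step can be bolted on (a rough sketch; treating runs of letters as words and lower-casing everything are assumptions about what should count as a word):

# turn the OCR'd text into a sorted list of unique, lower-cased words
tr -cs '[:alpha:]' '\n' < scanned.txt | tr '[:upper:]' '[:lower:]' | sort -u > wordlist.txt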
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/377359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13099/" ] }
377,373
I updated archlinux with "pacman -Syu" and then when I've restart, the system can't start. This is the report: Warning: /lib/modules/4.11.9-1-ARCH/modules.devname not found - ignoringversion 232Error: device 'UUID=b5a9a977-e9a7-4d3d-96a9-dcf9c3a9010d' not found. Skipping fsck.Error: can't find UUID=b5a9a977-e9a7-4d3d-96a9-dcf9c3a9010d You are now being dropped into a emergency shell.Can't access tty: job control turned off In that shell my keyboard doesn't work. I'm trying with a livecd of archlinux: mounting the partitions and using chroot.I check the uuid of the root partition in "/etc/fstab". It's my fstab: # /dev/sda2 UUID=b5a9a977-e9a7-4d3d-96a9-dcf8c3a9010d / ext4 rw,relatime,data=ordered 0 1 # /dev/sda1 UUID=FBA9-977B /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2 # /dev/sda4 UUID=a43b8426-c93a-4f32-99c8-9dd5cf645373 /home ext4 rw,relatime,data=ordered 0 2 # /dev/sda3 UUID=9eec735e-3157-4e0e-a5c6-ef3a7c674201 none swap defaults 0 And it's the result of "lsblk -f" NAME FSTYPE LABEL UUID MOUNTPOINTloop0 squashfs /run/archiso/sfs/airootfssda ├─sda1 vfat FBA9-977B ├─sda2 ext4 b5a9a977-e9a7-4d3d-96a9-dcf8c3a9010d /mnt├─sda3 swap 9eec735e-3157-4e0e-a5c6-ef3a7c674201 └─sda4 ext4 a43b8426-c93a-4f32-99c8-9dd5cf645373 /mnt/home I've updated the system again with "pacman -Syu" and I tried to make "mkinitcpio -p linux", but it haven't solved the problem (in spite of the result of the command it's ok). This is the report: ==> Building image from preset: /etc/mkinitcpio.d/linux.preset: 'default' -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img==> Starting build: 4.11.9-1-ARCH -> Running build hook: [base] -> Running build hook: [udev] -> Running build hook: [block] -> Running build hook: [block]WARNING: Possubly missing firmware for module: aic94xxWARNING: Possubly missing firmware for module: wd719x -> Running build hook: [autodetect] -> Running build hook: [modconf] -> Running build hook: [filesystems] -> Running build hook: [keyboard] -> Running build hook: [fsck]==> Generating module dependencies==> Creating gzip-compressed initcpio image: /boot/initramfs-linux.img==> Image generation successful==> Building image from preset: /etc/mkinitcpio.d/linux.preset: 'fallback' -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-fallback.img -S autodetect==> Starting build: 4.11.9-1-ARCH -> Running build hook: [base] -> Running build hook: [udev] -> Running build hook: [block]WARNING: Possubly missing firmware for module: aic94xxWARNING: Possubly missing firmware for module: wd719x -> Running build hook: [modconf] -> Running build hook: [filesystems] -> Running build hook: [keyboard] -> Running build hook: [fsck]==> Generating module dependencies==> Creating gzip-compressed initcpio image: /boot/initramfs-linux-fallback.img==> Image generation successful I tried to change the order of HOOKS in "/etc/mkinitcpio.conf". But it doesn't work. This is the current order: base udev block autodetect modconf filesystems keyboard fsck "uname -r" returns: 4.11.7-1-ARCH "pacman -Q linux" returns: linux 4.11.9-1 The file of warrning "/lib/modules/4.11.9-1-ARCH/modules.devnam" exists. I tried to install and use "linux-lts" but the result it's the same.I use grub and I tried to reconfigure it too. What can I do?
I just forgot to mount /boot (thank you, jasonwryan). The lsblk output above shows that sda1 was never mounted at /mnt/boot, so the kernel, initramfs and GRUB configuration regenerated in the chroot were written to the /boot directory of the root filesystem instead of to the boot partition the machine actually reads at startup. The solution to this problem, in my case, was: Use a livecd to mount all partitions (including the boot partition) and chroot in. Update: pacman -Syu Regenerate the initramfs: mkinitcpio -p linux If you use grub (from inside the chroot): grub-mkconfig -o /boot/grub/grub.cfg Restart.
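A sketch of the whole sequence (device names taken from the lsblk output in the question; arch-chroot ships on the Arch live ISO):

# from the live ISO
mount /dev/sda2 /mnt           # root filesystem
mount /dev/sda1 /mnt/boot      # the step that was missing
mount /dev/sda4 /mnt/home
arch-chroot /mnt
pacman -Syu
mkinitcpio -p linux
grub-mkconfig -o /boot/grub/grub.cfg   # path as seen from inside the chroot
exit
reboot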
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/377373", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139976/" ] }