reattach-to-user-namespace should not be necessary anymore; you can try without it entirely. There is little tmux could be doing to make copying a small amount of text take a couple of seconds, so this is probably pbcopy. If your terminal supports OSC 52 (iTerm2 does; I'm not certain about Alacritty), you could use it instead of pbcopy - see https://github.com/tmux/tmux/wiki/Clipboard. If you are not using OSC 52, make sure to turn set-clipboard off.
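A minimal sketch of what that can look like in ~/.tmux.conf, assuming a terminal that implements OSC 52 (the set-clipboard option is per the tmux manual; check the man page of your version):

```bash
# Let tmux write the selection to the system clipboard itself via the
# OSC 52 escape sequence - no pbcopy/reattach-to-user-namespace pipeline.
set -g set-clipboard on

# If the terminal does NOT support OSC 52, turn the feature off instead,
# so tmux never emits an escape sequence the terminal would ignore:
# set -g set-clipboard off
```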
I have been having this issue for about 2 years now (since I started using tmux in Summer 2018) and I have not found a solution. Any help is greatly appreciated!

The copy functionality of tmux is slow. Often, when I copy using mouse selection and then paste in the terminal, it almost always pastes what was in the clipboard previously; the current mouse selection was never copied. I have to select and then wait for a bit (usually a second or so). I am used to copy-on-mouse-selection from Linux and from iTerm2 on macOS, which are blazingly fast; I never have to think about or wait for the selection to be copied. This causes problems because sometimes my clipboard holds sentences, and when I select, say, a dir path/filename/git commit hash and paste in the terminal expecting it, here comes a barrage of sentences.

I have tried a few ways to address this problem, none of them to my satisfaction. I tried to print a message after the copy was done:

bind -T copy-mode-vi Tab send -X copy-pipe "reattach-to-user-namespace pbcopy"\; display-message "copied to system clipboard"

I saw the mistake I was making by calling reattach-to-user-namespace every time on selection, so instead I started tmux only once with this program:

set-option -g default-command "reattach-to-user-namespace -l bash"
bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe "pbcopy" \; display-message "copied to system clipboard"

Later I switched to tmux-plugins/tmux-sensible, which starts tmux with reattach-to-user-namespace. A few weeks back I disabled all the bind-key copy configuration and switched to tmux-plugins/tmux-yank. I prefer the tmux-yank configuration to what I had, but the slowness or delay in copying still persists; I am often pasting the wrong information, thinking the mouse selection was copied into the system clipboard.

I also keep my software updated as much as possible, hoping to see some improvement. The issue is I do not know where the problem is in order to debug further. Is the slowness in tmux, reattach-to-user-namespace, Alacritty, or macOS? I tried switching back to iTerm2 because this issue was getting annoying, but after using Alacritty, iTerm2 feels sluggish and slow, as I am streaming a lot of logs in the terminal for debugging. And unfortunately Alacritty does not support multi-window mode or tabs at the moment. Alacritty+tmux on macOS is the perfect setup for me, except for this one annoyance with the slowness of copy.

I am running:

tmux 3.1b
reattach-to-user-namespace 2.8
alacritty 0.4.2 (f68de37)
macOS Catalina 10.15.4

I ran into https://github.com/alacritty/copypasta; I want to give it a try and see if it improves the situation. And just while writing this question, I pasted the wrong info many times and had to go back and wait for the copy to properly finish.
tmux copy is slow on macOS
You wrote:

tmux new-session -d -s mail -n mutt mutt
tmux new-window -d -t mail -n offlineimap "offlineimap -c ~/.dotfiles/mutt/dev.offimap"
tmux split-window -d -v -t mail "offlineimap -c ~/.dotfiles/mutt/trash.offimap"

Try this instead; write it into a tmux_mail.sh or whatever you wish to name it:

#!/bin/bash
tmux \
  -v \
  new-session \
    -d \
    -s mail \
    -n mutt \
    mutt \
  \; \
  new-window \
    -d \
    -n offlineimap \
    "offlineimap -c ~/.dotfiles/mutt/dev.offimap" \
  \; \
  split-window \
    -d \
    "offlineimap -c ~/.dotfiles/mutt/trash.offimap" \
  \; \
  attach

Once saved and made executable, run it at your command prompt to enjoy your new tmux mail session:

$ ./tmux_mail.sh

Explanation:

- the \; makes each new-window or split-window argument part of a single continuous tmux command
- so we omit -t: you don't need -t to specify the target session when all of this is given as part of one single tmux command
- every line except the last ends with a bash line continuation \ (a backslash immediately followed by a line return), which lets us write the command in full yet with nearly one option per line, for easier reading and clarity; this may help you maintain the script in the long run
- -v, for verbose logging, needs to be specified at the beginning, per the syntax synopsis in man tmux
- the -d is to detach, so tmux does not carry out its default behavior of attaching to the current terminal right away - not until we have had a chance to fully specify the options we want
- thus at the end, when done with all the options we want tmux to apply, we have to attach to actually see the final result on our terminal
- note that if a program is non-interactive - it just runs and exits, such as echo - then the same happens in tmux: it runs the program and then closes the window when the program exits

So as long as your mutt and offlineimap commands actually work, don't quit right away, and run fine outside of tmux, then with these commands they should work in tmux, creating the tmux session you wanted.
I want a new session with two windows named mutt and offlineimap. The offlineimap window should be split into two, running two different offlineimap instances (with different profiles).

tmux new-session -d -s mail -n mutt mutt
tmux new-window -d -t mail -n offlineimap "offlineimap -c ~/.dotfiles/mutt/dev.offimap"
tmux split-window -d -v -t mail "offlineimap -c ~/.dotfiles/mutt/trash.offimap"

This just creates a mutt window, but no offlineimap windows. => What is wrong with that code?

Edit: When I change the content of the "" to sleep 1000 it works.

Edit 2: I've found out the second tmux window exits instantaneously after its creation, because it "uses the same metadata as the first offlineimap".
tmux: creating new split window doesn't work
Multiplexers provide additional functionality that may be of benefit to your workflow; this includes:

- configurable scrollback (i.e., number of lines to retain)
- searchable scrollback
- cut/copy and paste between clients (windows)
- persistence across sessions
- status line for system information

The most significant benefit is persistence: start a job in a multiplexer session and you can log off, knowing that when you log back in and reattach, the job will still be running.
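As a concrete illustration of the persistence point, a minimal detach/reattach workflow looks like this (the session name and long-running-job are placeholders):

```bash
$ tmux new-session -s work     # start a named session
$ long-running-job &           # kick off work inside it
# press Ctrl-b d to detach, log out, go home...

$ tmux attach -t work          # reattach later: the job is still running
```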
What are good reasons to use Linux terminal multiplexers such as screen and tmux instead of a physical terminal?
Advantages of using Linux terminal multiplexers
From the manual:

new-session [-AdDEPX] [-c start-directory] [-e environment] [-f flags] [-F format] [-n window-name] [-s session-name] [-t group-name] [-x width] [-y height] [shell-command]
(alias: new)
Create a new session with name session-name. The new session is attached to the current terminal unless -d is given. window-name and shell-command are the name of and shell command to execute in the initial window.

You can replace the screen command with (naming the session for easy management, as suggested by @cas):

tmux new-session -d -s torrent qbittorrent-nox

It will create a detached session named torrent (rather than a number) with qbittorrent-nox running inside. When the command ends, so will the session.

You can then attach to this specific session (in case there are multiple) with:

tmux attach-session -t torrent

Or, if you already have a tmux client attached to another session, you can switch to this one instead with:

tmux switch-client -t torrent

(or simply navigate with Ctrl-b s, or Ctrl-b ( and Ctrl-b )).
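If the command might run more than once (say, from a login script), a small guard avoids the "duplicate session" error; a sketch using has-session:

```bash
# Create the detached session only if it doesn't already exist
tmux has-session -t torrent 2>/dev/null || tmux new-session -d -s torrent qbittorrent-nox
```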
How can I get tmux to execute a command and then detach (that is, automatically, in a single line, without further input beyond initially starting the session)? I don't know how to write sophisticated scripts, but I used to do this in screen with a command like:

screen -S torrent -d -m qbittorrent-nox

I have been reading that tmux is better and I would like to switch, but I couldn't figure out how to achieve what the above command does.
How to execute a command in tmux and detach using a single line command?
In the end, I gave up solving the problem from Linux, and tried it from Windows. That turned out to be a good idea: running chkdsk solved the problem.
I'm trying to recover data from a Windows (NTFS) disk that became unreadable for some reason. I'm doing this from within a Linux Mint environment. I can see the partitions normally, and I also see the correct free space of the disk. However, mounting it gives problems:

> sudo ntfs-3g -o force,rw /dev/sdc1 /media/windows2
ntfs_mst_post_read_fixup_warn: magic: 0x43425355 size: 4096 usa_ofs: 33488 usa_count: 34671: Invalid argument
Actual VCN (0x8000020b5080000) of index buffer is different from expected VCN (0x0).
ntfs_mst_post_read_fixup_warn: magic: 0x43425355 size: 4096 usa_ofs: 33488 usa_count: 34671: Invalid argument
Actual VCN (0x8000020b5080000) of index buffer is different from expected VCN (0x0).
ntfs_mst_post_read_fixup_warn: magic: 0x43425355 size: 4096 usa_ofs: 33488 usa_count: 34671: Invalid argument
Actual VCN (0x8000020b5080000) of index buffer is different from expected VCN (0x0).

> ls /media/windows2
ls: cannot access /media/windows2: Input/output error

I already tried running testdisk:

Disk /dev/sdb - 500 GB / 465 GiB - CHS 60801 255 63
     Partition               Start        End    Size in sectors
>* HPFS - NTFS              0  32 33    25 126 37     407552
 P HPFS - NTFS             25 126 38 60797  59 62  976297984

If I choose "list files", I see a couple of files/directories, but far from all. If I go to "undelete", I see some files, mainly temporary internet files, which likely indeed were removed. I don't seem to have a way to see the remaining actual files. I already tried to rebuild the boot sector and check the MFT in checkdisk, but this didn't help. I also tried running ntfsfix:

> sudo ntfsfix /dev/sdc1
Mounting volume... OK
Processing of $MFT and $MFTMirr completed successfully.
Checking the alternate boot sector... OK
NTFS volume version is 3.1.
NTFS partition /dev/sdc1 was processed successfully.

There seems to be nothing physically wrong with the disk:

> sudo badblocks /dev/sdc -sv
Checking blocks 0 to 488386582
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found. (0/0/0 errors)

Is there anything else I could try to get access to the files?
Recovering data from broken (NTFS) partition
Take a look at the e2fsprogs package. You can get all your backup superblocks from:

dumpe2fs /dev/sd<partition-id> | grep -i superblock

and then have e2fsck check the FS for you, or just try to mount with a backup superblock:

mount -o sb=<output-of-dumpe2fs> /dev/sd<partition-id> /your/mountpoint

See this for reference: http://www.cyberciti.biz/faq/linux-find-alternative-superblocks/. testdisk works well to recover partition tables, not clobbered file systems. PhotoRec is a last resort when you have really messed things up and can't get any of the filesystem structure recovered.
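A sketch of that sequence with placeholder values (the backup superblock number comes from the dumpe2fs output; note that mount's sb= option counts in 1 KiB units, so on a 4 KiB-block filesystem the backup at block 32768 becomes sb=131072):

```bash
# List primary and backup superblock locations
sudo dumpe2fs /dev/sdb1 | grep -i superblock

# Check the filesystem using a backup superblock (32768 is a placeholder)
sudo e2fsck -b 32768 /dev/sdb1

# Or mount read-only via a backup superblock (4 KiB blocks: 32768*4 = 131072)
sudo mount -o ro,sb=131072 /dev/sdb1 /mnt
```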
When I was installing Mint Debian Edition, unlike with the classic edition, the installation automatically formatted my home partition even though I did not tell it to format. The partition was ext4 before, as it is now. I believe the data is still there, as it was a quick format. I have now booted the computer from a live USB to prevent writing to it, and I ran TestDisk. Is there any way to revert to a previous superblock so I can recover my data?
Data recovery from an accidental format on ext4 partition
You can't fix LVM by growing the size back to the original, unless you were very lucky and the LV had no fragmentation whatsoever due to previous resizes. Chances are the new LV will have the first 20G or so of your original filesystem, but the remaining 780G (or whatever) are scrambled eggs (wrong data, wrong offset, wrong order). And that's assuming you're using HDD media. If it was SSD, with issue_discards=1 in your lvm.conf, the data would simply be gone, which is why I never use this option.

You have to check /etc/lvm/{archive,backup}/ for old versions of your metadata. Each file in there says when it was created, for example:

description = "Created *before* executing 'lvremove HDD/mdtest1'"

You're looking for the one that says Created before lvresize 850 with the G missing. Then vgcfgrestore the LVM metadata using that backup, and hopefully it will be back in working order.

If you do not have such files in /etc/lvm - either because you did this from a Live CD that lost this data, or because the damage happened on your root LV - things get a bit more complicated, as you have to hope for the LVM metadata on disk to contain this bit of history in its circular buffer. A rough method to see what's possibly in there:

dd if=/dev/pvdevice bs=1M count=1 | strings -w -n 16
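A hedged sketch of the restore step (the volume group name HDD and the archive filename are placeholders; pick the file whose description matches the step just before the bad resize):

```bash
# List archived metadata versions with their descriptions and timestamps
sudo vgcfgrestore --list HDD

# Restore the metadata version recorded before the bad lvresize
sudo vgcfgrestore -f /etc/lvm/archive/HDD_00042-123456789.vg HDD

# Re-activate the LV and check the filesystem read-only before trusting it
sudo lvchange -ay HDD/mylv
sudo fsck -n /dev/HDD/mylv
```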
I was trying to resize my LUKS crypt following https://wiki.archlinux.org/index.php/Resizing_LVM-on-LUKS and I got to the partition resize with parted and seriously screwed things up. I typed 870 as the new size and forgot to put a G on the end, so it shrunk my partition down to 870M. I immediately resized it to 870G, but by then the damage was done. Luckily I could still decrypt the LUKS crypt, but I couldn't get my Logical Volume to even have a device file on the system. LVM recognized the volume as existing and showed the device file it was attached to, but the file didn't exist and it showed as having no filesystem. I did vgscan --mknodes and it successfully generated the device file, but testdisk still wouldn't show it. I recreated the volume and put a new ext4 filesystem on it, and now testdisk will show the drive but scanning yields nothing. I get a whole bunch of ext4 entries, but all of them say either Can't open filesystem or No files found. Is there any way for me to recover the filesystem that was on the disk? I don't want to write any data to it until I get what's on it off of it, unless that's not possible.

EDIT: After poking around, the real thing I need help with is recovering files from a previous ext4 filesystem. My drive had an ext4 system on it and that has since been overwritten with a new one; however, all the data from the old system still exists, as shown by sudo dd if=/dev/Storage/Storage bs=1M | strings -fn 16. The only thing I did after my screw-up was put a new ext4 FS on and nothing else, so most of my data is probably still intact. I need to recover that data.

pvdisplay shows the following:

  --- Physical volume ---
  PV Name               /dev/mapper/Storage
  VG Name               Storage
  PV Size               931.51 GiB / not usable 3.68 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               0
  Allocated PE          238466
  PV UUID               CAueGx-Glzx-zCd0-H00m-R8d5-KTRc-9Ff7ay

  --- Physical volume ---
  PV Name               /dev/mapper/sda3_crypt
  VG Name               mint-vg
  PV Size               118.50 GiB / not usable 0
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              30336
  Free PE               10
  Allocated PE          30326
  PV UUID               UJJfu8-S2Ac-pEZl-PlPa-uUzJ-axEs-ckbDWG

My backup shows:

# Generated by LVM2 version 2.02.98(2) (2012-10-15): Thu Aug 13 20:45:52 2015

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/lvreduce --config log{command_names=0} -f -l 217600 /dev/Storage/Storage'"

creation_host = "desktop"  # Linux desktop 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64
creation_time = 1439523952 # Thu Aug 13 20:45:52 2015

Storage {
    id = "lM3S9T-inH1-mKsq-5doN-H8hT-zO3F-LF9jDx"
    seqno = 2
    format = "lvm2" # informational
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 8192 # 4 Megabytes
    max_lv = 256
    max_pv = 256
    metadata_copies = 0

    physical_volumes {
        pv0 {
            id = "nH1Axo-5nBo-WcyA-Xc4E-KwRt-K0Ib-ScK8Ch"
            device = "/dev/mapper/Storage" # Hint only
            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 1953520999 # 931.511 Gigabytes
            pe_start = 2048
            pe_count = 238466 # 931.508 Gigabytes
        }
    }

    logical_volumes {
        Storage {
            id = "Qb01kz-y1RG-PVQp-cGjB-sj77-xgnJ-w9kn3n"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_host = "desktop"
            creation_time = 1436247513 # 2015-07-06 22:38:33 -0700
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 238466 # 931.508 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 0
                ]
            }
        }
    }
}
LVM Filesystem recovery
Intel MacBook Pros use a GUID partition table (GPT) for disk partitioning, which one can edit with gdisk. A copy of the partition table metadata is usually stored as a backup of the primary. You can read more detail about it on the Arch Wiki here.

You first need to know the device name (/dev/sda, /dev/sdb, ...) of your Mac disk. You can use the blkid, lsblk or dmesg commands to determine the exact name if you do not know it.

From your Ubuntu live USB, use gdisk /dev/sdX (where X is your device name), e.g. /dev/sdb (assuming the USB is given sda and the Mac disk is the only other disk):

$ sudo gdisk /dev/sdb
GPT fdisk (gdisk) version 0.8.10

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): r

Recovery/transformation command (? for help): b
Recovery/transformation command (? for help): w

And reboot. The b restores the backup GUID partition table and w writes it to disk.

You may wish to also run the v (verify) command after restoring the backup table and before writing it to the disk. gdisk is very sane about letting one create and modify the partition table in memory; nothing is written to disk until you write it with the w command. Be very sure you are happy with the recovery result before you write it to disk.
I booted to a live Ubuntu USB, wanting to delete partitions on an SD card, and ended up deleting all the partitions on my MacBook Pro's internal HD. I have already tried gpart (which gets stuck at "Begin scan...") and testdisk (which can't create partitions, since "Function write_part_mac not implemented"!). I did try Cmd+R at boot, but gave up as it was taking a long time; I thought there would be an easier and faster alternative. Is there a way to recover the data along with the partitions on my HD? Here's my testdisk output for the Mac HD:

     Partition               Start        End    Size in sectors
> P DOS_FAT_32                40     409639     409600 [EFI]
  P HFS                   409640  489972567  489562928
Recover partitions Mac OS internal HD
The good news is that the Knoppix ISO is relatively small compared to your drive size, so even with the 1-2 minutes (an eternity in computer terms), only about 1% of your drive was overwritten, and most of your data is not wiped. If it were wiped, even trying recovery would be almost impossible without huge cost.

Before unmounting the HDD, the system still had the original disk partition information in memory, as well as the file structure. Since you did not go through the normal filesystem driver code, it had not noticed the underlying disk blocks had changed, and everything looked OK (but wasn't). If you had not yet unmounted, you could have written down the output of fdisk -l, because that is a good way to start trying to restore the partitioning information. Once that is there, you might have a chance to restore the filesystem(s) and use any redundant information (depending on the filesystem) to restore things. Since you indicate connecting the drive to a Windows 7 machine, you probably had VFAT or NTFS on the drive, which AFAIK have less recovery information than some of the Linux native filesystems.

PhotoRec will try to find files by combining elements of files that fit together. Depending on the fragmentation of the files, that can be a lengthy process, so trying to recover the partitioning and then the filesystem should have priority.

The first thing to do (if you haven't already done so) is buy a 2 TB drive and make a copy of the 1 TB drive on there, so you can restore the 1 TB drive to its current state when necessary (a new drive might be an investment, but it will give you a backup device, irrespective of whether you get your data back). Then try to restore the partition information. If you still know how you partitioned it, you can just try to run the same commands - no formatting of new partitions at any time! And don't do so before you have made a backup copy of the whole drive. If there were multiple partitions on the drive, there is a good chance that all but the first can be completely recovered once the partition information is restored.

Then try to recover the filesystem (tools depend on the type); if it is VFAT or NTFS, you might be better off looking for Windows-based tools for that. In parallel you can try to run PhotoRec on the backup copy of the disk; as long as you don't change the actual backup copy, just pull any reconstructed files from there and copy them to a new location outside of the backup.

It is difficult to give you very concrete steps, as much depends on the actual partitioning and filesystem types, sorry for that. I hope this helps you on the way to getting some of your data restored. (Did I mention to make a backup of your disk before trying any changes?)
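A minimal sketch of the "image it first" step (the device and paths are placeholders; verify the device name with lsblk before running anything):

```bash
# Clone the whole 1 TB drive to an image on the new 2 TB disk, carrying on
# past read errors and padding unreadable blocks with zeros.
sudo dd if=/dev/sdX of=/mnt/backup/hdd.img bs=4M conv=noerror,sync status=progress

# (GNU ddrescue is a better fit if the drive actually has failing sectors.)
```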
I wanted to make a bootable USB stick from a Knoppix ISO on GNU/Linux with dd.

Situation:

- laptop with Linux Mint on it
- HDD mounted to the system (the .iso file is on it); 1 TB HDD (800 GB used)
- USB stick mounted on the system (to make bootable)
- command used: $ sudo dd if=/HDD/knoppix.iso of=/dev/sdb

What happened: I accidentally typed the wrong path into the command. Instead of writing to the USB, I copied the knoppix.iso onto the HDD. As soon as I noticed this, I immediately stopped the process. I reviewed the file structure of the HDD and everything was in its place, so I unmounted the HDD. At home I attached the HDD to my PC with Win7 Professional and found out that my data was gone, with the unpacked knoppix.iso on the HDD. Since then I haven't attached the HDD to any kind of computer. I tried to find a solution to this problem, but I want to be sure not to mess something up, since I have no experience here. I also didn't make a backup.

What I want to do: I found PhotoRec and I want to try recovering my data with it.

Question: Does this program help me reach my goal here? And do you have recommendations on how to use it? Do you know any other programs which could be helpful here?
Recover data from HDD
Although the official documentation states that "photorec ignores the filesystem", even if that were true (it's not), the partition could still play a role, more so if it's not 4K-aligned. Consider "How PhotoRec works":

  PhotoRec first tries to find the data block (or cluster) size. If the file system is not corrupted, this value can be read from the superblock (ext2/ext3/ext4) or volume boot record (FAT, NTFS). Otherwise, PhotoRec reads the media, sector by sector, searching for the first ten files, from which it calculates the block/cluster size from their locations. Once this block size is known, PhotoRec reads the media block by block (or cluster by cluster).

So, not only does it not ignore the filesystem, it even makes assumptions if there is no filesystem. And all further actions then depend on these assumptions. It even asks you for the filesystem first thing in the process:

  To recover lost files, PhotoRec need to know the filesystem type where the file were stored:
  >[ ext2/ext3 ] ext2/ext3/ext4 filesystem
   [ Other     ] FAT/NTFS/HFS+/ReiserFS/...

As long as you set PhotoRec to analyze a single partition that was a single filesystem, and you didn't overwrite it with another filesystem (like when dd'ing an ISO to the wrong device), all is (probably) well. I guess except if the first 10 files were randomly aligned to something that won't be true anymore later.

But as soon as there is a new partition, a new filesystem - which might have a different alignment, a different block (cluster) size, a different layout... - there is a chance that PhotoRec will read through it all but recognize only a fraction of what is there.

So yes, it makes sense to run the command on (arbitrary) segments of the disk. Either that, or disable PhotoRec's assumption-making. Perhaps this could be done using the expert mode, though I have not experimented with this mode in detail:

  The expert mode option allows the user to force the file system block size and the offset. Each filesystem has its own block size (a multiple of the sector size) and offset (0 for NTFS, exFAT, ext2/3/4); these values are fixed when the filesystem has been created/formatted. When working on the whole disk (i.e. original partitions are lost) or a reformatted partition, ★if PhotoRec has found very few files★, you may want to try the minimal value that PhotoRec lets you select (it's the sector size) for the block size (0 will be used for the offset).

(★) So it turns out PhotoRec being unable to find all files when not run per-partition is even an officially documented feature. :-)
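A sketch of invoking PhotoRec per partition from the shell (paths are placeholders; /log and /d are documented photorec options, and the program stays interactive after startup):

```bash
# Carve the formerly-NTFS partition and the formerly-ext partition separately,
# so PhotoRec's block-size guess is made per filesystem, not across both.
sudo photorec /log /d /mnt/recovery/part1 /dev/sdb1
sudo photorec /log /d /mnt/recovery/part2 /dev/sdb2
```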
When recovering data from a formatted external hard disk with the photorec command of the testdisk tool: if the hard disk originally had 2 partitions (NTFS and ext), does it make sense to run the command twice, once for each partition/filesystem type?
Does it make sense to run testdisk's photorec once for each original filesystem?
The problem in your command is the !(*.doc|*.docx) construct: because of the space before it, it was expanded as a separate argument in your current directory (your home), so it matched every file and directory there that didn't end in .doc or .docx.

You have to cease any further writes to the filesystem, because when files are removed and unlinked (no remaining hard links to them), the filesystem frees the blocks previously allocated to the deleted file; these blocks are then allocated to new files and their contents overwritten.

So to recover your files, you can use the photorec command (sudo apt-get install testdisk on Debian). It will open a text-based window; follow its instructions. The files will be recovered under different, system-generated names.
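For reference, the deletion the asker probably intended needs extglob enabled and no space before the pattern; a hedged sketch (preview with echo before deleting anything):

```bash
# extglob is required for the !(...) pattern and is off by default in scripts
shopt -s extglob

# Preview what would be removed - the pattern must be glued to the path;
# with a space, it expands in the *current* directory instead.
echo /home/user/dir/dir1/dir11/!(*.doc|*.docx)

# Only then delete
rm -rf /home/user/dir/dir1/dir11/!(*.doc|*.docx)
```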
So I wanted to clean a directory and its subdirectories of non-.doc/.docx files, and I ran the following command:

rm -rf /home/user/dir/dir1/dir11/ !(*.doc|*.docx)

and I just deleted my home directory (hundreds of thousands of files...). My last backup is not so recent. It's not the end of the world, but it would help a lot if I could recover it. So my questions are:

1) Why did it go wrong?
2) Can I recover the deleted directories and files? Obviously they're not in the Trash. Would testdisk or photorec help?

Thanks for the help!
I deleted my home directory... can I recover it? [duplicate]
Finally, I was able to reproduce the output of TestDisk on the second drive. I simply changed the geometry settings to those of the original drive. I did the following:

Look up the geometry settings of the original drive:

$ sudo sfdisk -g /dev/sda
/dev/sda: 969021 cylinders, 16 heads, 63 sectors/track

Open TestDisk, select the "test disc" and apply the above settings in the Geometry menu. Do not be afraid - the settings are not permanent. Then choose Quick analyze, and the result should show the same as it did for the original drive. This time it recognized the desired Linux partitions like the original drive did, and I could safely write the partition table since it only applied to the "test disc".

Further reading, very helpful: How to mount an encrypted home partition?
I am trying to recover partitioning information using TestDisk. To try things out, I copied all data from the original hard disk to another physical drive using dd as follows:

$ dd if=/dev/sda of=/dev/sdb bs=4096 conv=notrunc,noerror,sync

Running TestDisk on the original drive shows this:

Disc /dev/sda - 500 GB / 465 GiB - CHS 969021 16 63

Warning: the current number of heads per cylinder is 16, but the correct value may be 255.
You can use the Geometry menu to change this value. It's something to try if
- some partitions are not found by TestDisk
- or the partition table can not be written because partitions overlap.

And after continuing without changing anything, "Quick analyze" outputs:

Disk /dev/sda - 500 GB / 465 GiB - CHS 969021 16 63
     Partition               Start        End    Size in sectors
 * HPFS - NTFS              2   0 33    205   3 19     204800 [System-reserviert]
 P HPFS - NTFS            205   3 20 203174   9 41  204593152 [W764]
 P Linux               203176  10 11 305359   3 51  103000064
 L Linux               313112   6  7 373707   2 50   61079552
   HPFS - NTFS         373707   2 51 969022  15 47  600078336 [DATA]
   HPFS - NTFS         373707   2 58 969020  15 63  600076329

Running TestDisk "Quick analyze" on the test drive shows this:

Disk /dev/sdb - 500 GB / 465 GiB - CHS 969021 255 63
     Partition               Start        End    Size in sectors
   HPFS - NTFS              0  32 33     12 223 19     204800 [System-reserviert]
   HPFS - NTFS             12 223 20  12748  53 41  204593152 [W764]
   Linux                12748  86 11  19159 202 51  103000064
   Linux                19646  68  7  23448  74 50   61079552
   HPFS - NTFS          23448  74 51  60801 112 47  600078336 [DATA]

The disks are not the same brand; the test drive is an SSD. It seems as if the partitioning information is not copied to the test drive. Why?

The output of sfdisk is the same for both disks. Here for /dev/sda:

$ sudo sfdisk -d /dev/sda
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start=376696782, size=600076386, Id= 7, bootable
/dev/sda2 : start=        0, size=        0, Id= 0, bootable
/dev/sda3 : start=        0, size=        0, Id= 0, bootable
/dev/sda4 : start=        0, size=        0, Id= 0, bootable

Here is the output of parted. The disk model differs - the other data is the same:

$ sudo parted /dev/sda print
Model: Disk Name (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File systems  Flags
 1      193GB   500GB  307GB  primary  ntfs          boot

The geometry (as seen by the kernel) for the S-ATA disk...

$ sudo sfdisk -g /dev/sda
/dev/sda: 969021 cylinders, 16 heads, 63 sectors/track

...and for the SSD disk:

$ sudo sfdisk -g /dev/sdb
/dev/sdb: 60801 cylinders, 255 heads, 63 sectors/track
How to copy partitioning information with dd?
Okay, sorry for answering my own question so soon, but I noticed something flabbergasting. The .qcow2 file was of size 120400379904 bytes, whereas converting the image with qemu-img convert -O raw gave me an image of size 128849018880 bytes. Quite a difference. Now, if we take the size in sectors found by testdisk, we will indeed notice that 512*251657216 is 128848494592, which happens to be 512 KiB less than the file size of the "raw" image. That looks promising, I thought to myself.

I generated these files a few years ago, so I am not sure whether I created them as sparse images. Nevertheless, if qemu-img info shows it that way, I thought, let's try to convert the image format. Keep in mind that this doesn't change the original file!

qemu-img convert -O raw input output

does that job, albeit slowly. Running testdisk again on that file worked surprisingly well, although I was still unable to convince mount to use a different superblock, despite -o sb=...:

TestDisk 6.14, Data Recovery Utility, July 2013
Christophe GRENIER <[emailprotected]>
http://www.cgsecurity.org

Disk bigdata/vm_disk_vdb.img - 128 GB / 120 GiB - CHS 15666 255 63
     Partition               Start        End    Size in sectors
>P ext3                     0   1  1 15664 239 62  251657216 [DATA]

Structure: Ok.

Keys T: change type, P: list files, Enter: to continue
ext3 blocksize=4096 Large file Sparse superblock, 128 GB / 119 GiB

After that, I could get testdisk to copy the files into a directory and diff it against my backups. There were a few corruptions, such as:

ext2fs_read_inode(ino=384492884) failed with error 2133571369.

and also other minor issues, but the problems affected only about 0.1% of all files and folders.

Start testdisk as follows to be able to figure out which files must be considered damaged:

testdisk /log imagefile.img
I have an interesting case where e2fsck refuses to recognize the file system inside a qcow2 image file. Using testdisk I am able to see the partition, so some markers must be left. The reason this problem occurred in the first place is that the host of the virtual machine died. So I choose None as the "type" of partition and get the following:

TestDisk 6.14, Data Recovery Utility, July 2013
Christophe GRENIER <[emailprotected]>
http://www.cgsecurity.org

Disk /dev/loop0 - 120 GB / 112 GiB - 235156929 sectors

The harddisk (120 GB / 112 GiB) seems too small! (< 4079258 TB / 3710063 TiB)
Check the harddisk size: HD jumpers settings, BIOS detection...

The following partitions can't be recovered:
     Partition               Start        End    Size in sectors
>  ext3                     640 251657855  251657216 [DATA]
   ext3                 1864062 253521277  251657216 [DATA]
   ext3                 1864064 253521279  251657216 [DATA]
   ext3                 2387454 254044669  251657216 [DATA]
   ext3                 2387456 254044671  251657216 [DATA]
   ext3                 2911614 254568829  251657216 [DATA]
   ext3                 2911616 254568831  251657216 [DATA]
   ext3                 3435774 255092989  251657216 [DATA]
   ext3                 3435776 255092991  251657216 [DATA]
   ext3                 3959934 255617149  251657216 [DATA]

[ Continue ]
ext3 blocksize=4096 Large file Sparse superblock, 128 GB / 119 GiB

It seems superblocks still exist and are intact, but how can I convince mount to use one of those superblocks as long as I don't know where they are located? kpartx doesn't see anything on /dev/loop0 after I did the usual losetup -o 32256 /dev/loop0 imagefile for qcow2. The image itself is (qemu-img info):

file format: qcow2
virtual size: 120G (128849018880 bytes)
disk size: 112G
cluster_size: 65536
Format specific information:
    compat: 0.10

NB: I do have backups, but they are a few weeks old, and if at all possible I'd like to diff the stuff on the disk against the backups. Most are Git and Mercurial repos, so it's possible to fetch them again from elsewhere.
How to find alternative superblocks in ext3 file system of partition-less qcow2?
photorec can't filter files by date during recovery, but it does restore a file's metadata (including timestamps) when it can find it - see the "recovery completed" section linked here. So after recovering all your data, sort the files by date to find the most recent ones. Note that photorec only restores the metadata when it is found, so this may not work for every file.
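Assuming PhotoRec wrote into its default recup_dir.* directories, one way to list the recovered files newest-first by their restored modification time (a sketch; GNU find syntax):

```bash
# Print "epoch  date  path", newest first; adjust head -n to taste
find recup_dir.* -type f -printf '%T@ %TY-%Tm-%Td %p\n' | sort -rn | head -n 50
```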
I just accidentally erased my home directory using the rm command. I am being pointed towards photorec and testdisk to recover the files. However, I have a backup, about 2 or 3 weeks old I think, and I was wondering if either photorec or testdisk is able to recover files by date (last modification), since I am reading it may not be worth the trouble otherwise. It doesn't look like there is such an option in photorec.
Can TestDisk or PhotoRec recover files by date?
You could, but if any file crosses a split boundary, you won't be able to recover it. Of course, you could go for overlapping splits to avoid that problem (the overlap size being at least the maximum file size you expect to recover). However, it's questionable at best whether you would actually see the desired speed increase.

Suppose you want to go for splits of 4 GiB plus 128 MiB overlaps:

# first part:
dd if=/dev/external1 of=split.img bs=1M count=$((4096+128))
photorec split.img

# second part:
dd if=/dev/external1 of=split.img bs=1M skip=4096 count=$((4096+128))
photorec split.img

And in the next step you skip=$((4096*2)), and so on... until you reach the end of the disk.
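The two steps generalize to a loop; a sketch assuming 4 GiB chunks, a 128 MiB overlap, and a disk size you substitute in MiB (sizes and device are placeholders):

```bash
#!/bin/bash
# Carve the disk in overlapping chunks so a file straddling a boundary
# still appears whole in the following chunk.
chunk=4096        # chunk size in MiB
overlap=128       # overlap in MiB (>= largest file you expect to recover)
total=953869      # disk size in MiB - substitute your drive's size

for ((skip=0; skip<total; skip+=chunk)); do
    dd if=/dev/external1 of=split.img bs=1M skip="$skip" count=$((chunk+overlap))
    photorec /log /d recovered split.img
done
```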
I'm running PhotoRec on an external HD connected via USB and it's taking too long, with the estimated remaining time constantly increasing. Is there a way to create a disk image from that external drive, split it into smaller parts, copy each part to my internal SSD (one at a time), and run PhotoRec on it? I don't have much free space on the SSD, so I can't have the entire image copied at once.
Can I split a disk image and run Photorec on each part individually?
From my personal experience, most file carvers like to make assumptions based on which filesystems they detect. You could create a loop device with a 32 GB offset just to make sure your scanner/file carver does not get sidetracked by seemingly valid data you're not interested in (i.e. skip the area you know was overwritten). Note the offset should be 512-byte or 4K aligned. If you know the exact number of bytes that were overwritten, use that instead.

losetup --find --show --read-only --offset $((32*1000*1000*1000)) /dev/sdx

Then you can use any program of your liking on that loop device. photorec is one; there are others like foremost, scalpel, ... knock yourself out. If you had partitions / logical volumes that started beyond the 32 GB mark, you can also try testdisk.

If you had a filesystem with redundant metadata, you could try to fix that too, but you need to know the original partition offset (e.g. 1 MiB) and some way to locate the metadata backups. This is best done with overlays: https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file

An overlay allows you to run fsck or other programs that write, against a virtual /dev/mapper/sdxoverlay device, without actually performing those writes on the real disk.
We had an accident at home: a 32 GB SD card image was copied onto an external hard disk by mistake (the copy should have gone to the SD card). I guess 32 GB of the external drive were overwritten, but as far as my knowledge of digital forensics goes, most of the information should still be there. Can you recommend a good low-level scanner to analyse the information bit by bit across the whole hard disk, to recover pictures and Word files (mainly)? I used to know testdisk - is it valid for this? I hope you can give us a hand on this. Kindest regards
How to scan a hard disk at a low level to recover files from a partially overwritten partition?
No, it is not possible to change the physical geometry of a disk. Hard disks don't have any “physical geometry” that can be changed in software. Hard disks don't even report any physical geometry information to the computer: things like cylinders are a software abstraction that hasn't been related to the hardware since the late 1980s.
I don't actually want to do it, because I am paranoid, but my question is: can I actually change my drive's physical geometry using testdisk and damage the drive to the point of being unrecoverable? Does testdisk support this?
Can testdisk change physical geometry?
Just to bring this full circle for anyone who finds this in the future: I spoke to a software engineer at my work, and he advised that I had already achieved the best outcome possible. I.e., recovering the data with photorec was the best I could hope for; I wouldn't be able to recover the data within the folder structure, as the folder structure had been lost when I deleted the partition table. Hope that helps!
Please don't mark this as duplicate - I have read many other questions and proposed answers and can't seem to get them to work.

I have 3 1TB hard drives: 2 SSDs and 1 HDD. I accidentally deleted the partition table, and possibly formatted the HDD, whilst trying to install Windows 10 on one of the SSDs. I'm trying to recover my data from the HDD; it has a lot of photos from about 10 years ago. The HDD is just a data disk, although it briefly had a Linux installation on it in 2018. Disk model: ST1000DM003-1SB1.

I have been trying to recover the photos using TestDisk, with mixed results. I have been unable to restore the partition table; however, I know the data is still on the disk because I was able to extract it using PhotoRec. The only problem is that PhotoRec has removed all structure and order from approx. 8K photos.

The results from TestDisk are as follows:

Disk /dev/sda - 1000 GB / 931 GiB - CHS 121601 255 63
Current partition structure:
     Partition               Start        End    Size in sectors
 1 * FAT32                    0   1  1    242 254 63    3903732 [RECOVERY]
 2 P FAT32 LBA              243   0  1   7401 254 63  115009335 [NO NAME]
 3 E extended LBA          7402   0  1 102927 254 63 1534625190
 5 L FAT32 LBA             7402   1  1  12160 254 63   76453272 [NO NAME]
 X extended              102927  97  1 102927 232 38       8543
Invalid FAT boot sector
 6 L FAT16 <32M           102927  98 62 102927 232 38       8419
 6 L FAT16 <32M           102927  98 62 102927 232 38       8419

Running fdisk -l returns:

Disk /dev/sda: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000DM003-1SB1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x17556621

Device     Boot      Start        End    Sectors   Size Id Type
/dev/sda1  *            63    3903794    3903732   1.9G  b W95 FAT32
/dev/sda2          3903795  118913129  115009335  54.9G  c W95 FAT32 (LBA)
/dev/sda3        118913130 1653538319 1534625190 731.8G  f W95 Ext'd (LBA)
/dev/sda5        118913193  195366464   76453272  36.5G  c W95 FAT32 (LBA)
/dev/sda6       1653528490 1653536908       8419   4.1M  4 FAT16 <32M

Partition 1 does not start on physical sector boundary.
Partition 2 does not start on physical sector boundary.
Partition 3 does not start on physical sector boundary.
Partition 5 does not start on physical sector boundary.
Partition 6 does not start on physical sector boundary.

Trying to mount returns:

billy@billy-Mint:~$ ntfsfix -b /dev/sda3
Mounting volume... Error opening read-only '/dev/sda3': Permission denied
FAILED
Attempting to correct errors... Error opening read-only '/dev/sda3': Permission denied
FAILED
Failed to startup volume: Permission denied
Error opening '/dev/sda3': Read-only file system
Volume is corrupt. You should run chkdsk.

sudo fsck /dev/sda3 yielded the following:

fsck.ext2: Attempt to read block from filesystem resulted in short read while trying to open /dev/sda3
Could this be a zero-length partition?

Any help would be greatly appreciated. I'm stumped about how to proceed.
Recover Formatted NTFS Partition
The count of colors available to tput is given by tput colors.

To see the basic 8 colors (as used by setf in the urxvt terminal and setaf in the xterm terminal):

$ printf '\e[%sm▒' {30..37} 0; echo    ### foreground
$ printf '\e[%sm ' {40..47} 0; echo    ### background

They are usually named as follows:

Color     #define          Value    RGB
black     COLOR_BLACK        0      0, 0, 0
red       COLOR_RED          1      max,0,0
green     COLOR_GREEN        2      0,max,0
yellow    COLOR_YELLOW       3      max,max,0
blue      COLOR_BLUE         4      0,0,max
magenta   COLOR_MAGENTA      5      max,0,max
cyan      COLOR_CYAN         6      0,max,max
white     COLOR_WHITE        7      max,max,max

To see the extended 256 colors (as used by setaf in urxvt):

$ printf '\e[48;5;%dm ' {0..255}; printf '\e[0m \n'

If you want numbers and an ordered output:

#!/bin/bash
color(){
    for c; do
        printf '\e[48;5;%dm%03d' $c $c
    done
    printf '\e[0m \n'
}

IFS=$' \t\n'
color {0..15}
for ((i=0;i<6;i++)); do
    color $(seq $((i*36+16)) $((i*36+51)))
done
color {232..255}

The 16 million colors need quite a bit of code (some consoles can not show them). The basics is:

fb=3;r=255;g=1;b=1;printf '\e[0;%s8;2;%s;%s;%sm▒▒▒ ' "$fb" "$r" "$g" "$b"

fb is front/back, i.e. 3 or 4. A simple test of your console's capacity to present so many colors is:

for r in {200..255..5}; do fb=4;g=1;b=1;printf '\e[0;%s8;2;%s;%s;%sm ' "$fb" "$r" "$g" "$b"; done; echo

It will present a red line with a very small change in tone from left to right. If that small change is visible, your console is capable of 16 million colors. Each of r, g, and b is a value from 0 to 255 for RGB (Red, Green, Blue).

If your console type supports this, this code will create a color table:

mode2header(){
    #### For 16 million colors use \e[0;38;2;R;G;Bm, where each of R,G,B is {0..255}
    printf '\e[mR\n' # reset the colors.
    printf '\n\e[m%59s\n' "Some samples of colors for r;g;b. Each one may be 000..255"
    printf '\e[m%59s\n' "for the ansi option: \e[0;38;2;r;g;bm or \e[0;48;2;r;g;bm :"
}
mode2colors(){
    # foreground or background (only 3 or 4 are accepted)
    local fb="$1"
    [[ $fb != 3 ]] && fb=4
    local samples=(0 63 127 191 255)
    for r in "${samples[@]}"; do
        for g in "${samples[@]}"; do
            for b in "${samples[@]}"; do
                printf '\e[0;%s8;2;%s;%s;%sm%03d;%03d;%03d ' "$fb" "$r" "$g" "$b" "$r" "$g" "$b"
            done; printf '\e[m\n'
        done; printf '\e[m'
    done; printf '\e[mReset\n'
}
mode2header
mode2colors 3
mode2colors 4

To convert a hex color value to the (nearest) 0-255 color index:

fromhex(){
    hex=${1#"#"}
    r=$(printf '0x%0.2s' "$hex")
    g=$(printf '0x%0.2s' ${hex#??})
    b=$(printf '0x%0.2s' ${hex#????})
    printf '%03d' "$(( (r<75?0:(r-35)/40)*6*6 + (g<75?0:(g-35)/40)*6 + (b<75?0:(b-35)/40) + 16 ))"
}

Use it as:

$ fromhex 00fc7b
048
$ fromhex #00fc7b
048

To find the color number as used in HTML colors format:

#!/bin/dash
tohex(){
    dec=$(($1%256))   ### input must be a number in range 0-255.
    if [ "$dec" -lt "16" ]; then
        bas=$(( dec%16 ))
        mul=128
        [ "$bas" -eq "7" ] && mul=192
        [ "$bas" -eq "8" ] && bas=7
        [ "$bas" -gt "8" ] && mul=255
        a="$((  (bas&1)    *mul ))"
        b="$(( ((bas&2)>>1)*mul ))"
        c="$(( ((bas&4)>>2)*mul ))"
        printf 'dec= %3s basic= #%02x%02x%02x\n' "$dec" "$a" "$b" "$c"
    elif [ "$dec" -gt 15 ] && [ "$dec" -lt 232 ]; then
        b=$(( (dec-16)%6  )); b=$(( b==0?0: b*40 + 55 ))
        g=$(( (dec-16)/6%6)); g=$(( g==0?0: g*40 + 55 ))
        r=$(( (dec-16)/36 )); r=$(( r==0?0: r*40 + 55 ))
        printf 'dec= %3s color= #%02x%02x%02x\n' "$dec" "$r" "$g" "$b"
    else
        gray=$(( (dec-232)*10+8 ))
        printf 'dec= %3s gray= #%02x%02x%02x\n' "$dec" "$gray" "$gray" "$gray"
    fi
}

for i in $(seq 0 255); do
    tohex ${i}
done

Use it as ("basic" is the first 16 colors, "color" is the main group, "gray" is the last gray colors):

$ tohex 125    ### A number in range 0-255
dec= 125 color= #af005f
$ tohex 6
dec=   6 basic= #008080
$ tohex 235
dec= 235 gray= #262626
I am in the process of colorizing my terminal’s PS1. I am setting color variables using tput; for example, here’s purple:

PURPLE=$(tput setaf 125)

Question: How do I find the color codes (e.g. 125) of other colors? Is there a color table guide/cheat sheet somewhere? I’m just not sure what 125 is... Is there some way to take a hex color and convert it into a number that setaf can use?
tput setaf color table? How to determine color codes?
The tput command is an excellent tool, but unfortunately it can't retrieve the actual settings for an arbitrarily selected terminal. The reason for this is that it reads stdout for the terminal characteristics, and this is also where it writes its answer. So the moment you try to capture the output of tput cols, you have also removed the source of its information.

Fortunately, stty reads stdin rather than stdout for its determination of the terminal characteristics, so this is how you can retrieve the size information you need:

terminal=/dev/pts/1
columns=$(stty -a <"$terminal" | grep -Po '(?<=columns )\d+')
rows=$(stty -a <"$terminal" | grep -Po '(?<=rows )\d+')

By the way, it's unnecessarily cumbersome to write this as echo $(/usr/bin/tput cols). For any construct echo $(some_command), you are running some_command and capturing its output, which you then pass to echo to output. In almost every situation you can imagine, you might as well have just run some_command and let it deliver its output directly. It's more efficient and also easier to read.
I'm encountering an issue where I am trying to get the size of a terminal from a script. Normally I would use the command tput cols inside the console; however, I want to be able to accomplish this strictly from a script. As of now I am able to detect the running console and get its file path, but I'm struggling to use this information to get the console's width. I've attempted to use the command tput, but I'm fairly new to Linux/scripting, so I don't really know what to do.

The reason for doing this is that I want to be able to set up a cron entry that notifies the console of its width/columns every so often.

This is my code so far:

tty.sh

#!/bin/bash

# Get PID of terminal
# terminal.txt holds most recent PID of console in use
value=$(</home/test/Documents/terminal.txt)

# Get tty using the PID from terminal.txt
TERMINAL="$(ps h -p $value -o tty)"
echo $TERMINAL

# Use tty to get full filepath for terminal in use
TERMINALPATH=/dev/$TERMINAL
echo $TERMINALPATH

COLUMNS=$(/home/test/Documents/get_columns.sh)
echo $COLUMNS

get_columns.sh

#!/usr/bin/env bash
echo $(/usr/bin/tput cols)

The normal output of TERMINAL & TERMINALPATH is pts/<terminalnumber> and /dev/pts/<terminalnumber>, for example pts/0 & /dev/pts/0.
Getting console width using a bash script
In the ncurses terminal database, cvvis is used as documented in the terminfo manual page:

    cursor_visible    cvvis    vs    make cursor very visible

and if there is no difference between normal and very visible, the cvvis capability is usually omitted. The feature is used in curs_set:

    The curs_set routine sets the cursor state to invisible, normal, or very visible for visibility equal to 0, 1, or 2 respectively. If the terminal supports the visibility requested, the previous cursor state is returned; otherwise, ERR is returned.

The terminfo(5) manual page also says:

    If the cursor needs to be made more visible than normal when it is not on the bottom line (to make, for example, a non-blinking underline into an easier to find block or blinking underline) give this sequence as cvvis. If there is a way to make the cursor completely invisible, give that as civis. The capability cnorm should be given which undoes the effects of both of these modes.

Some terminal descriptions may (incorrectly) equate cvvis and cnorm, since some emacs configurations assume that cvvis is always set.
tput civis

successfully hides the cursor.

tput cvvis

should unhide it, but it doesn't. Any idea what the problem might be?
Hide and unhide cursor with tput
tput can handle expressions (for instance in sgr and setaf) which the typical shell scripter would find less than usable. To get an idea of what is involved, see the output from infocmp with the -f (formatting) option applied. Here is one of the examples using those strings from xterm's terminfo descriptions:

xterm-16color|xterm with 16 colors,
        colors#16,
        pairs#256,
        setab=\E[
                %?
                        %p1%{8}%<
                        %t%p1%{40}%+
                %e
                        %p1%{92}%+
                %;%dm,
        setaf=\E[
                %?
                        %p1%{8}%<
                        %t%p1%{30}%+
                %e
                        %p1%{82}%+
                %;%dm,
        setb=
                %p1%{8}%/%{6}%*%{4}%+\E[%d%p1%{8}%m%Pa
                %?%ga%{1}%=
                        %t4
                %e%ga%{3}%=
                        %t6
                %e%ga%{4}%=
                        %t1
                %e%ga%{6}%=
                        %t3
                %e%ga%d
                %;
                m,
        setf=
                %p1%{8}%/%{6}%*%{3}%+\E[%d%p1%{8}%m%Pa
                %?%ga%{1}%=
                        %t4
                %e%ga%{3}%=
                        %t6
                %e%ga%{4}%=
                        %t1
                %e%ga%{6}%=
                        %t3
                %e%ga%d
                %;
                m,
        use=xterm+256color,
        use=xterm-new,

The formatting splits things up - a script or program to do the same would have to follow those twists and turns. Most people give up and just use the easiest strings.

The 16-color feature is borrowed from IBM aixterm, which maps 16 codes each for foreground and background onto two ranges:

- foreground onto 30-37, and 90-97
- background onto 40-47, and 100-107

A simple script:

#!/bin/sh
TERM=xterm-16color
export TERM
printf '   %12s %12s\n' Foreground Background
for n in $(seq 0 15)
do
    F=$(tput setaf $n | cat -v)
    B=$(tput setab $n | cat -v)
    printf '%2d %12s %12s\n' $n "$F" "$B"
done

and its output show how it works:

     Foreground   Background
 0       ^[[30m       ^[[40m
 1       ^[[31m       ^[[41m
 2       ^[[32m       ^[[42m
 3       ^[[33m       ^[[43m
 4       ^[[34m       ^[[44m
 5       ^[[35m       ^[[45m
 6       ^[[36m       ^[[46m
 7       ^[[37m       ^[[47m
 8       ^[[90m      ^[[100m
 9       ^[[91m      ^[[101m
10       ^[[92m      ^[[102m
11       ^[[93m      ^[[103m
12       ^[[94m      ^[[104m
13       ^[[95m      ^[[105m
14       ^[[96m      ^[[106m
15       ^[[97m      ^[[107m

The numbers are split up because aixterm uses the 30-37 and 40-47 ranges to match ECMA-48 (also known as "ANSI") colors, and uses the 90-107 range for codes not defined in the standard. Here is a screenshot of xterm using TERM=xterm-16color, where you can see the effect.

Further reading:

- infocmp - compare or print out terminfo descriptions
- Parameterized strings, in the terminfo manual
- tput, reset - initialize a terminal or query terminfo database
- ECMA-48: Control Functions for Coded Character Sets
- aixterm Command
- Aren't bright colors the same as bold? (XTerm FAQ)
In the .*rc files I see online, and in various code, I tend to see a lot of people manually using ANSI escape sequences instead of tput. I had the understanding that tput is more universal/safe, so this makes me wonder: is there any objective reason one should use escape sequences in place of tput? (Portability, robustness on errors, unusual terminals...?)
Is there any objective benefit to escape sequences over tput?
The tput command uses terminfo (from man tput):

    tput [-Ttype] capname [parms ... ]
    (…)
    For a complete list of capabilities and the capname associated with each, see terminfo(5).
    (…)
    capname indicates the capability from the terminfo database. When termcap support is compiled in, the termcap name for the capability is also accepted.

The civis and cnorm capabilities are documented in terminfo's man page:

    cursor_invisible    civis    vi    make cursor invisible
    cursor_normal       cnorm    ve    make cursor appear normal (undo civis/cvvis)

In general, when you don't know which man page describes a particular command, you can search for it. In this case, I found the above by running man -K civis.
I wanted to hide the cursor, and I was aware of the tput command. I did search its man page. On searching the Internet, I found:

$ tput civis    # to hide the cursor
$ tput cnorm    # to bring back the cursor

These work perfectly, but these options are not mentioned anywhere in the man page. Where are they officially documented?
Full list of tput options
You could use tput to move the cursor to a given line on the screen, e.g.,

tput cup 11 0

to move it to the twelfth line (values count from zero). Along the same lines, you could use tput to clear from that position to the end of the screen, using the ed capability. Combining the two,

tput cup 11 0 && tput ed

might be what was wanted.

If you want to go to the halfway mark on the screen, the first number returned by

stty size

is (on most systems) the number of rows of the screen. Adding that to the command:

tput cup $(stty size | awk '{print int($1/2);}') 0 && tput ed

The clear program differs from tput ed: it moves the cursor to the home position (upper left) and clears from that point to the end of the screen.

Caveat: on some platforms tput ed may not work due to problems fixed long ago. In those cases, upgrading your curses/ncurses configuration will fix the problem.
Is there some way to clear the terminal but, instead of leaving the prompt at the top of the screen, leave it in the middle? It looks like clear basically ignores all command-line parameters. I thought there would be some way to do this with tput, but I can't find one.
Clear half of the screen from the command-line
As your shebang and syntax indicate unportable bash, I prefer it this way:

function box_out() {
    local s="$*"
    tput setaf 3
    echo " -${s//?/-}-
| ${s//?/ } |
| $(tput setaf 4)$s$(tput setaf 3) |
| ${s//?/ } |
 -${s//?/-}-"
    tput sgr 0
}

Of course, you can optimize it if you wish.

Update, as requested in a comment, to handle multiline text too:

function box_out() {
    local s=("$@") b w
    for l in "${s[@]}"; do
        ((w<${#l})) && { b="$l"; w="${#l}"; }
    done
    tput setaf 3
    echo " -${b//?/-}-
| ${b//?/ } |"
    for l in "${s[@]}"; do
        printf '| %s%*s%s |\n' "$(tput setaf 4)" "-$w" "$l" "$(tput setaf 3)"
    done
    echo "| ${b//?/ } |
 -${b//?/-}-"
    tput sgr 0
}

Call it with multiple parameters, like box_out 'first line' 'more line' 'even more line'.
I have created this function, which prints the output as seen in the example image. But the implementation of this function seems too complex. Is there a way I can improve it, or an alternative solution?

#!/bin/bash

function box_out() {
    input_char=$(echo "$@" | wc -c)
    line=$(for i in `seq 0 $input_char`; do printf "-"; done)
    # tput should be the best option: what tput does is read the terminal
    # info and render the correctly escaped ANSI code for you.
    # Code like \033[31m will break the readline library in some terminals.
    tput bold
    line="$(tput setaf 3)${line}"
    space=${line//-/ }
    echo " ${line}"
    printf '|' ; echo -n "$space" ; printf "%s\n" '|';
    printf '| ' ; tput setaf 4; echo -n "$@"; tput setaf 3 ; printf "%s\n" ' |';
    printf '|' ; echo -n "$space" ; printf "%s\n" '|';
    echo " ${line}"
    tput sgr 0
}

box_out $@
bash script: echo output in a box
I found two pieces of information here on the Stack Exchange network that helped me arrive at this working answer:

- https://stackoverflow.com/q/263890/5419599
- https://stackoverflow.com/q/4409399/5419599

However, the code in this answer is my own. See the edit history if you want more verbosity; I've edited out all the cruft and "steps along the way."

I think the best way is:

center() {
  termwidth="$(tput cols)"
  padding="$(printf '%0.1s' ={1..500})"
  printf '%*.*s %s %*.*s\n' 0 "$(((termwidth-2-${#1})/2))" "$padding" "$1" 0 "$(((termwidth-1-${#1})/2))" "$padding"
}

center "Something I want to print"

Output on a terminal 80 columns wide:

========================== Something I want to print ===========================

Note that the padding doesn't have to be a single character; in fact the padding variable isn't - it's 500 characters long in the above code. You could use some other form of padding by changing just the padding line:

padding="$(printf '%0.2s' ^v{1..500})"

Results in:

^v^v^v^v^v^v^v^v^v^v^v^v^v Something I want to print ^v^v^v^v^v^v^v^v^v^v^v^v^v^

Another handy use is:

clear && center "This is my header"
How can I print $myvar padded so that it is centered in the terminal, with = filling either side out to the edge of the screen?
How can I print a variable with padded center alignment?
Your unit test script probably calls tput in order to generate pretty output showing which tests pass and fail. Under cron there is no terminal and thus no terminal type ($TERM), so tput cannot control the nonexistent terminal. Your unit test script needs to have 2 modes: running on a terminal, where it can call tput to generate pretty-looking output; and not running on a terminal, where it should not call tput and instead generate a generic text-only output format that is suitable for piping into an email as you are doing here. The easiest way for the unit tests to know whether or not they are running on a terminal is to test whether the stdio file descriptors refer to a terminal. If it's a shell script, then:
if [ -t 1 ]; then
    tput bold; echo pretty; tput sgr0
else
    echo ugly
fi
Basically: do not call tput unless you are running on a terminal, and you will thus avoid the error you are getting, plus produce reasonable output in whichever mode you happen to be running under.
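A minimal sketch of how a script can set both modes up once, at the top (the variable names are just illustrative):
if [ -t 1 ]; then
    bold=$(tput bold) reset=$(tput sgr0)    # interactive: real escape sequences
else
    bold= reset=                            # cron/pipe: plain text
fi
printf '%sPASS%s: some test\n' "$bold" "$reset"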
I am trying to run a bash script I have via cron, and I am getting the following error at the beginning of the execution: tput: No value for $TERM and no -T specifiedHere is what is in my crontab: 0 8 * * 1-5 cd /var/www/inv/ && /var/www/inv/unitTest run all 2>&1| mail -r "[emailprotected]" -s "Daily Inventory Unit Test Results" [emailprotected]
tput: No value for $TERM and no -T specified
You're making this a lot more complicated than it should be. You can handle alignment with $message and not care about the width of ANSI sequences:
#! /usr/bin/env bash
message() {
    [ x"$2" = xOK ] && color=2 || color=1
    let offset=$(tput cols)-4-${#2}
    printf "%-*s[ %s%s%s ]\n" $offset "$1" "$(tput setaf "$color")" "$2" "$(tput sgr0)"
}
message "install font library" "OK"
message "update configuration file on server" "ERR"
Edit: Please note that most printf(1) implementations don't cope well with length calculations for multibyte charsets. So if you want to print messages with accented characters in UTF-8 you might need a different approach. shrug
I have a function in a bash script: message_offset, which is used to print the status of a bash script, i.e. you would call it passing a message into it and a status, like this
message_offset "install font library" "[ OK ]"
and it would print into the terminal, where printf's %*s format is used to always set the rightmost character of [ OK ] at 80 columns wide, e.g. output would be
install font library                  [ OK ]
update configuration file on server   [ ERR ]
                                           ^
                                           |
                                     always at 80
If echo was used, output would look like this
install font library [ OK ]
update configuration file on server [ ERR ]
code:
#!/usr/bin/env bash

function message_offset() {
    local message="$1"
    local status="$2"
    # compensate for the message length by reducing the offset
    # by the length of the message,
    (( offset = 80 - ${#message} ))
    # add a $(tput sgr0) to the end to "exit attributes" whether a color was
    # set or not
    printf "%s%*s%s" "${message}" 80 "$status" "$(tput sgr0)"
}
this all works OK, until I try to use tput to add some color sequences into the string, i.e. to make "[ ERR ]" red. It seems that the printf "%*s" formatting is counting the tput character sequences when it's setting the offset, so if I call the function like this
message_offset "update configuration file on server" "$(tput setaf 1)[ ERR ]"
the output will look something like:
install font library                  [ OK ]
update configuration file on server     [ ERR ]
because printf "%*s" is saying, hey, this string has got all the "[ ERR ]" characters plus the "$(tput setaf 1)" chars, but obviously the "$(tput setaf 1)" chars are not printed, so they don't actually affect the padding. Is there a way I can add color to the "status" messages and still use the tput style color sequences?
Is it possible to change font colors in the terminal, without impacting printf's "%*s" padding format?
You could answer that by looking at the output of infocmp: using the -1 option to print one capability per line for a given setting of TERM (i.e., "ansi"), simplifying a grep: infocmp -1 ansi |grep sgr0=and seeing (for example) sgr0=\E[0;10m,which tells you that you could do printf '\033[0;10m'to get the same effect as tput -T ansi sgr0But you're not using ANSI, but rather some particular terminal, and though the ones you're using are similar, not all are identical. It's more likely that your TERM variable is set to something like "xterm", which (omitting the terminal type) gives sgr0=\E(B\E[m,The common part is the \E[m, which is the terminfo syntax for \033[m. The other characters in each sequence are resetting the alternate character set (and can depend on the terminal type).
To avoid a subshell, what are the ANSI escape equivalents of tput sgr0 for an ANSI-compatible terminal?
What is the ANSI escape equivalent of `tput sgr0`?
string=whatever stty size | { read y x tput sc # save cursor position tput cup "$((y - 1))" "$((x - ${#string}))" # position cursor printf %s "$string" tput rc # restore cursor. }That assumes all characters in $string are one cell wide (and that $string doesn't contain control characters (like newline, tab...)). If your string may contain zero-width (like combining characters) or double-width ones, you could use ksh93's printf's %Ls format specifier that formats based or character width: string='whatéver' # aka string=$'\uFF57\uFF48\uFF41\uFF54\uFF45\u0301\uFF56\uFF45\uFF52' stty size | { read y x tput sc # save cursor position tput cup "$((y - 1))" 0 # position cursor printf "%${x}Ls" "$string" tput rc # restore cursor. }That would erase the leading part of the last line though.
How do I output a string in the bottom right corner of the terminal?
Outputting a string in the bottom right corner of the terminal
After more googling and scouring the documentation (mainly terminfo), I finally figured out that I need to fall back to the older termcap code since the capname is not supported for all terminfo capabilities. ed=$(tput ed || tput cd)
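For example, a hedged usage sketch that clears from the cursor to the end of the screen with whichever name the local tput understands:
ed=$( { tput ed || tput cd; } 2>/dev/null )
tput cup 11 0
printf '%s' "$ed"    # erase from the cursor to the end of the screen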
The output of tput ed is empty and I can't figure out why. Other capabilities work fine. Also ed is not missing from infocmp output so tput should match, right? $ printf '%q' "$(tput ed)" ''$ printf '%q' "$(tput home)" $'\033'\[HI'm using zsh on Mac OS 10.14.6 and iTerm2. TERM=xterm-256color.
tput ed is empty
If you check tput colors, you'll probably see that the answer is 8. The way to show the bright colors is by tput bold. This shows all 8x8x2 combinations of foreground and background, normal and bold. for cmd in sgr0 bold; do tput $cmd for i in $(seq 0 7); do for j in $(seq 0 7); do tput setaf $i; tput setab $j; echo -n " $i,$j " done tput sgr0; echo; tput $cmd done done
Wrote a bash function to display the console colors. But, it seems some of the colors are not possible to show that way! (?) Also, note the strange "bright black"! (Note: The below screendump is of xterm, but this is meant to be used in console mode. I had to use xterm to get a screendump. But it looks much the same.)function clr () { Color_names="bla red gre yel blu mag cya whi" Color_arr=($Color_names) for m in {0..15} do tput setaf $m echo -n ${Color_arr[$((m % 8))]}" " done echo tput sgr0 cat /sys/module/vt/parameters/default_red \ /sys/module/vt/parameters/default_grn \ /sys/module/vt/parameters/default_blu | tr "," "\t" }
Print console colors
terminfo is probably not going to help you much. In most terminal emulators, you can configure whether M-x sends <Esc>x or x with the 8th bit set, and the terminfo entry won't magically be updated when the user does so. Also, most character sets are 8 bits now, so it doesn't make much sense nowadays to use that 8th bit for Meta. I would just hardcode M-x == <Esc>x (the sequence of two characters \033 and x). This way, even if the terminal doesn't support that mode, the user can still type Esc, x in sequence.
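A minimal sketch of reading a key this way in a shell (assuming bash's read; the 0.05 s timeout used to distinguish a lone Esc press from an Esc-prefixed sequence is an arbitrary illustrative value):
read -rsn1 key
if [ "$key" = $'\033' ]; then
    # a byte arriving right after ESC means the user pressed Meta/Alt plus that byte
    if read -rsn1 -t 0.05 rest; then
        key="M-$rest"
    fi
fi
printf 'got: %s\n' "$key"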
I want to redefine keybindings for the commandline of a lisp dialect. The commandline is actually modeled after Vi and implemented with tput/terminfo. I want an alternative version with Emacs keybindings, and I would like to give it a try without using gnu readline or so. A typical key assignment looks like this: (setq *XtF1 (in '("tput" "kf1") (line T)) ... This question is about the ("tput" "kf1") part, ignore the rest. Now, "kf1" is easy to find in the terminfo man-page:
key_f1    kf1    k1    F1 function key
I find there 3 entries for "meta" too:
has_meta_key    km     km    Has a meta key (i.e., sets 8th-bit)
meta_off        rmm    mo    turn off meta mode
meta_on         smm    mm    turn on meta mode (8th-bit on)
But how can I use this info to define typical Emacs keybindings like e.g. M-f, M-b, M-d etc. with tput?
How to use the Meta/Alt Key with tput?
TL;DR The suggested check will fail to work on some kinds of Unix systems. Long answerPOSIX doesn't cover this aspect of tput, but read the Portability section of the manual page. Most systems do use terminfo names (such as setaf). FreeBSD for instance (although the underlying library is ncurses) provides a tput that only supports termcap names. The FreeBSD manual page is not clear, but see the source-code, which uses tgetstr (termcap) rather than tigetstr (terminfo). the terminfo manual page lists the terminfo and termcap names, e.g., setaf and AF. You could try using both in succession since the exit code tells whether the capability is defined. For example, you could modify your test function to handle either case this way: colors_supported() { command -v tput > /dev/null 2>&1 && ( tput setaf || tput AF ) > /dev/null 2>&1 }
Test for terminal color support using tput
#!/bin/sh
colors_supported() {
    command -v tput > /dev/null 2>&1 && tput setaf 1 > /dev/null 2>&1
}
I am currently using this code to establish if tput terminal colors are supported. It seems to work well, the question is: Is this piece of code POSIX-ly 100% portable and usable on all kinds of Unix-based systems? I mean Linux, *BSD, etc. I ask because I can't try it out everywhere at this moment. Which colors:
tput setaf 1
tput setaf 2
tput setaf 3
tput setaf 4
tput setaf 5
tput setaf 6
tput setaf 7
All-round 100% portable terminal color support check
One way to do it could be to set a tab stop at that position:
trap 'tabs -8' EXIT INT TERM # restore to default tab stops every 8 columns
                             # upon exit or signal
tput tbc     # clear tab stops
tput cup 5 15
tput hts     # set one and only tab stop
printf '\r'  # move back to the beginning of the line
printf '\t%s\n' foo bar baz
ping www.google.com | paste /dev/null -
That does affect the behaviour of the terminal and could cause problems when suspended, for instance. Advantages over @Thor's cup based approach are that it sends less output (not really a concern unless you're on a 300 baud satellite link) and that it behaves more gracefully if some other process like syslog is also writing text to the terminal. Another approach to make sure each line starts at position 15 would be to prefix each line with \r$(tput cuf 15):
tput cup 5 15
ping www.google.com | PREFIX=$(tput cr; tput cuf 15) awk '
  {print ENVIRON["PREFIX"] $0}'
See also the csr capability to set a scrolling region. If using zsh, see also its zcurses builtin.
I would like to position the cursor at a specific row/column and print a multi-line file/command that stays aligned to its first coordinate, so that tput clear tput cup 5 15 ping www.google.comwould output all subsequent lines in the 15th column. As it is, the first line prints correctly but the following lines are reset to the left. Is it possible using tput or any other method?
Is multi-line alignment possible with tput?
short: tput doesn't do that. longer: your script can do that For example, check if the standard output is a terminal: if [ -t 1 ] then color=$( tput setaf 1 ) normal=$( tput sgr0 ) else color="" normal="" fi
I have a shell function that colorizes a filename using tput.
color=$( tput setaf 1 )
normal=$( tput sgr0 )
colorize() {
    echo "${color}$1$(normal)"
}
When I call the function, the terminal displays the colorized filename as expected.
$ myScript.sh
. /path/to/dir # (in color red)
Then, if I redirect output to a file:
$ myScript.sh > /tmp/file.log
file.log still contains escape sequences like:
^[[36m~/path/to/my/file/^[(B^[[[m
Could this be related to $TERM and infocmp, and how the terminal interprets escape sequences? The idea is to simulate a "non-display" terminal, isn't it? My TERM is (Ubuntu 16.04):
$ echo $TERM
xterm
What should I do to prevent such tput escape sequences when my script is redirected to a file? Workaround: add an option to my script to disable colorization manually, as in ls or grep like --color=none.
Prevent tput escape sequences when redirecting script output to a file
IIUC: echo ${list1[*]} "$(tput bold)" ${list2[*]} "$(tput sgr0)"
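Note that echo inserts a space between each of its arguments, including around the command substitutions. A hedged variant using printf that avoids those stray spaces:
printf '%s %s%s%s\n' "${list1[*]}" "$(tput bold)" "${list2[*]}" "$(tput sgr0)"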
we have the following arrays - list1 and list2
list1="node23 node76 node 34"
list2="node02 node2 node9 node21"
we want to bold list2, so list1 will be printed without bold and list2 in bold
example
echo ${list1[*]} ${list2[*]}
node23 node76 node 34 node02 node2 node9 node21
so how do we insert tput bold into the echo command so that list2 is printed in bold?
tput bold - will bold the words
tput sgr0 - will return the bold to normal
linux + how to use the tput bold in echo command
zsh outputs its completions below the prompt, so it makes sure that area is clear. I don't think you can disable it. However you can tell zsh that the escape sequence to clear until the end of the screen is the empty string. infocmp -x | sed 's/ed=[^,]*/ed=/' | TERMINFO=~/.zsh-terminfo tic -x -Then, you can start zsh with: TERMINFO=~/.zsh-terminfo zshAnd unset TERMINFO later, but you'll find that the completion is never cleared which makes it awkward to use.
Using the zsh shell, I've noticed a difference in the behavior of the tput command compared to bash. For instance With zsh, if you have many things output to a terminal emulator window and you're at the "bottom", if you do tput cup 1, you go back up and everything below is cleared. With bash the output remains and is not cleared (on my setup). This comparison is valid for every terminal capability that moves the cursor around (cuu, cud, sc & rc, home, etc.). Do you have any control over that behavior in zsh i.e. not clearing? I'm looking at the possible options, but there's lots of things... If you cannot alter the behavior, can you use some other command which doesn't behave like such?
Cursor movement related tput commands under zsh: can the 'clear' behavior be configured?
tput is not working because it needs to know the current terminal type from the $TERM environment variable. When ~/.profile is read, $TERM may not be set yet, so tput can't produce any output. It is possible to tell tput which terminal type to use with the -T option. So this code will work:
LESS_TERMCAP_mb=$(tput -T ansi blink)                        # start blink
LESS_TERMCAP_md=$(tput -T ansi setaf 2 ; tput -T ansi bold)  # start bold
LESS_TERMCAP_me=$(tput -T ansi sgr0)                         # turn off bold, blink and underline
LESS_TERMCAP_so=$(tput -T ansi smso)                         # start standout (reverse video)
LESS_TERMCAP_se=$(tput -T ansi rmso)                         # stop standout
LESS_TERMCAP_us=$(tput -T ansi smul)                         # start underline
LESS_TERMCAP_ue=$(tput -T ansi rmul)                         # stop underline
export LESS_TERMCAP_mb
export LESS_TERMCAP_md
export LESS_TERMCAP_me
export LESS_TERMCAP_se
export LESS_TERMCAP_so
export LESS_TERMCAP_ue
export LESS_TERMCAP_us
This is my ~/.bashrc:
# ...unnecessary lines skipped...
# man colors
LESS_TERMCAP_mb=$(tput blink)                # start blink
LESS_TERMCAP_md=$(tput setaf 2 ; tput bold)  # start bold
LESS_TERMCAP_me=$(tput sgr0)                 # turn off bold, blink and underline
LESS_TERMCAP_so=$(tput smso)                 # start standout (reverse video)
LESS_TERMCAP_se=$(tput rmso)                 # stop standout
LESS_TERMCAP_us=$(tput smul)                 # start underline
LESS_TERMCAP_ue=$(tput rmul)                 # stop underline
export LESS_TERMCAP_mb
export LESS_TERMCAP_md
export LESS_TERMCAP_me
export LESS_TERMCAP_so
export LESS_TERMCAP_se
export LESS_TERMCAP_us
export LESS_TERMCAP_ue
This works, I can see colors in man pages. But when I move those lines from ~/.bashrc to ~/.profile (and re-login), the colors in man pages disappear. I really want to use tput because it is clearer than a heap of control symbols. Why does tput not work from .profile?
Setting LESS_TERMCAP_* variables with $(tput ...) in ~/.profile not working
What you want is a program that understands these terminal control sequences, and is able to render the final view. Well, such a program is called a terminal emulator. Some of them are graphical – like the program you launch to use your shell, e.g., gnome-terminal or alacritty, others are primarily headless. The older screen or the more modern tmux are the relevant ones here.write an "outer" script:create a named pipe Start your "inner" script (so the one that outputs stuff) in tmux, in the background in your outer script, read from the fifo (this blocks because nothing has been written), once that read finishes, instruct tmux to output a screenshotin your inner script, write something to the named pipe to signal you're at a state to be taken a screenshot ofPutting it together, something like #!/usr/bin/zsh # outer script # SPDX-License-Identifier: WTFPL# Make a temporary directory tmpdir="$(mktemp -d)"# Prepare the FIFO fifo="${tmpdir}/fifo" mkfifo "${fifo}"# Start the inner script in tmux tmux new-session -d -s "${tmpdir}" -e "FIFO=${fifo}" ./inner-script.sh … #^ ^ ^ ^------+-----^ ^------+--------^ ^ ^ #| | | | | | | #\------run tmux, the terminal emulator # | | | | | | # \---in tmux, run the "new-session" command to, well, get a new session # | | | | | # \---detach that session, i.e. *don't* connect it to the current terminal # | | | | # \--specify the session name. Conveniently, we abuse the name # of our temporary directory, as it's inherently unique # | | | # \-- for the started command, set the environment # variable FIFO to the name of our FIFO # | | # \-- launch your script # …with its arguments--/# Wait for something to be written to our FIFO cat "${fifo}" > /dev/null# instruct tmux to take a "pane shot" tmux capture-pane -t "${tmpdir}" -p > "/path/to/capture" # ^------+-----^ ^ ^---------+--------^ # | | | # \-------------------------------- target (session name as above) # | | # \----------------------- print to stdout # | # \----------- redirect stdout to file# Finally, clean up session and temporary directory tmux kill-session -t "${tmpdir}" rm -rf "${tmpdir}"You only need to add the writing of something to the fifo to your inner-script.sh, e.g., echo done > "${FIFO}"; sleep 100. If you already have a "recorded" output, your inner-script.sh might simply be cat recording.txt; echo done > "${FIFO}"; sleep 100
I captured the output of a script that uses tput to draw certain things on screen. When I perform cat myoutput then everything is displayed fine (it looks like the terminal reinterprets it from the beginning), but when I edit or pipe that output I see plenty of ANSI sequences and all the stuff prior to destructive printing like tput clear and the like. How can I postprocess it so I only get the final "render"? Even better: the origin of this is that I am currently teeing my script so it prints everything to a file in addition to the terminal with exec > >(tee /dev/tty) - is there a way to tell the stdout channel to "render" everything before saving?
How to "render" ouput from a command playing with tput so only the final terminal-postprocessed result is kept? [duplicate]
These statements are not equivalent. The first is checking whether $? is a non-empty string; it always will be, so the body of the first if will always run. [string] isTrue if the string string is not the null string; otherwise, false.The second is checking whether tput setaf 1 succeeded, or alternatively, whether its exit status (and therefore $?) is zero. The body of the second if will only run when the tput command succeeds. It is broadly equivalent to tput setaf 1 if [ $? -eq 0 ] ...The result you're describing indicates that tput setaf 1 is exiting with some non-zero code (for example, 1). "1" is a non-empty string, but it isn't zero. tput setaf 1 ; echo $? would show you what the actual code is, which you could then investigate further.
I'm trying to use the tput command in a bash script if statement but for some reason it's behaving unexpectedly. When running the command and then checking the exit code only in the if statement things work correctly, however, when running the command in the if statement directly things no longer work. Here's a simple script to illustrate my issue: #!/bin/bashtput setaf 1 if [ $? ] then echo "first if works" fiif tput setaf 1 then echo "second if works" fiRunning this script only prints "first if works". Shouldn't both of these if statements work since they are equivalent? Am I missing something? BTW, this is running on FreeBSD in case that matters.
tput command doesn't work in if statement
When connecting via ssh, environment variables may (or may not) be passed to the remote application. Also a "WIN application engine" could very well not set TERM at all. If TERM is putty (or xterm, for that matter), these have the same effect: tput setaf 2 tput -T putty setaf 2since the control sequences used for setaf are the same. Likewise, if TERM is linux, these are the same tput setaf 2 tput -T linux setaf 2The setaf is used for setting the foreground (text) to a particular value using ANSI (x3.64) escape sequences. Most of the terminals you are using do that — or some do not recognize any of those escape sequences. Since the application was not mentioned, you will have to experiment to see if the "WIN application engine" recognizes those escape sequences. If it does, it probably uses the same ANSI escapes, so you could just do tput -T xterm setaf 2(There are other differences between putty, linux and xterm, of course).
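If the engine cannot be made to set $TERM, a hedged workaround is to give the script a fallback itself (xterm here is just an illustrative default):
: "${TERM:=xterm}"
export TERM
tput setaf 2    # tput now has a terminal type to work with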
I use the tput command in my bash script in order to color the text, as in
tput setaf 2
When I run the script from PuTTY or the console, everything is OK, but when some external WIN application engine runs the script via SSH, we get the following error from tput:
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
Please advise what needs to be set (environment or otherwise) in my bash script in order to use the tput command.
linux + tput: No value for $TERM and no -T specified
tput lines and tput cols return the current size of the window in character units. If you drag the window to change its size, you will (usually) see the new values change interactively. They have nothing to do with the cursor position. Also, the top left of the window is (0,0). Your bottom right is currently (42,140), because the positions are zero-based. tput cub1 should move left. All of cub1, cuf1, cuu1, cud1 are single words -- no spaces. There is no cub2 etc. Also, tput cud1 moves the cursor to the first column of the line below: tput cuu1 stays in the same column of the line above. (This may be a bug in my configuration, though.) I don't think there is a tput option to read the cursor position. It is considered to be the programmer's responsibility to keep track of where it ended up, or to move the cursor to where the next text output should be placed. You can save one location with tput sc, and restore that location (maybe multiple times) with tput rc. cursor_address (cup), column_address (hpa) and row_address (vpa) take absolute row and column numbers (zero based) and are more useful than the one-char moves. The full set of terminal commands should be documented in man -s 5 terminfo, but tput is a fairly clunky tool (a new process for every command), so ncurses (or at least C) is needed for any serious work. This is my brief check on how these work on my TERM=xterm-256color. #! /bin/bashclear r=$( tput lines ); c=$( tput cols ) for (( k = 1; k <= $(( r * c )); ++k )); do printf '.'; done tput cup 0 0 printf 'At 0,0' sleep 2for ((k = 1; k <= 20; ++k)); do tput cud1; done for ((k = 1; k <= 20; ++k)); do tput cuf1; done printf 'At 20,20' sleep 2tput cuu1; for ((k = 1; k <= 8; ++k)); do tput cub1; done printf 'Up one: Hello, World' sleep 2tput cup 10 10 printf 'At 10,10' sleep 2tput cud1; tput cud1; printf 'Down two: Hello, World' sleep 5tput cup $(( r - 1 )) 0 printf '\n\n\n\n\n'EDIT: You can optimise the code by asking tput once for each sequence you expect to use, and having a function that does repeats. This will avoid calling so many external processes, and make the code simpler and more readable. The code below illustrates these optimisations. There is a fair amount of one-time code to fetch the tput sequences and define utility functions. After that, the actual display work is brief, efficient, and fairly readable. Process substitution does not work well on these sequences: for example, it removes newlines. The method below (read, refusing all delimiters) is robust. #! /bin/bashTget () { #:: Args (var, seq) -- Get a tput outcome. declare -n var="$1"; shift IFS='' read -d '' -r var < <( tput "${@}" ) }Tget lines lines; Tget cols cols; lines=$(( lines )); cols=$(( cols )); #.. Bug with newlines. Tget tUp cuu1; Tget tDown cud1; Tget tBack cub1; Tget tFrwd cuf1; Tget tGoTo cup 35 37; tGoTo="${tGoTo//3?/%d}" #.. Parameterise the row and col numbers.GoTo () { #:: Args (line, col) -- one-based. printf "${tGoTo}" "$1" "$2" } Put () { #:: Args (n, str) -- Repeat a string n times. typeset k for (( k = 1; k <= "$1"; ++k )); do printf '%s' "${2}"; done } At () { #:: Args (line, col, text) GoTo "$1" "$2"; printf '%s' "$3" }#.. Demonstrate the resources. GoTo 1 1; Put $(( lines * cols )) "." 
At 1 11 'At 1,11: My Title Goes Here '; sleep 2 Put 20 "$tDown"; Put 15 "$tFrwd"; printf 'At 21,16'; sleep 2 Put 1 "$tUp"; Put 8 "$tBack"; printf 'Up one: Hello, World'; sleep 2 At 10 10 'At 10,10: Updated This Space'; sleep 2 At 10 10 'At 10,10: Updated Again '; sleep 2 Put 2 "$tDown"; printf 'Down two: Goodbye, World'; sleep 5 GoTo "$lines" 1; Put 5 "$tDown"; sleep 2 printf "That's All, Folks!" Put $(( lines - 30 )) "$tDown"
I've written the following script to print out the current line and row every time it changes, i.e. every time I press up, down, left or right, but the output is always the same:
row:43:col:141
I believe this means top left of the screen, where 0 0 would be bottom right, but I'm not too sure. Here's my script:
#!/bin/bash
echo "press a key"
tput clear
row=$(tput lines)
col=$(tput cols)
while true; do
    echo "row:$row:col:$col"
    K1=,K2=,K3=
    read -s -N1
    K1="$REPLY"
    read -s -N1 -t 0.001
    K2="$REPLY"
    read -s -N1 -t 0.001
    K3="$REPLY"
    key="$K1$K2$K3"
    case "$key" in
        $'\033[A') tput cuu 1;;
        $'\033[B') tput cud 1;;
        $'\033[C') tput cuf 1;;
        $'\033[D') tput cup $row $(($col-1));;
        *) echo "pressed a key!";;
    esac
    row=$(tput lines)
    col=$(tput cols)
done
exit 0
apparently there's no way to move the cursor left using tput, so I used:
tput cup $row $(($col-1))
but this doesn't work either, any ideas how to fix it?
bash script to get current line and row not working
On terminal emulators which support it, you can use the \033[?7$p escape ("Request DEC private mode") to query that parameter (7 => Auto-wrap Mode): decrqm()( exec </dev/tty t=$(stty -g) trap 'stty "$t"; return' EXIT QUIT INT TERM stty -icanon -echo time 1 min 0 e=$(printf '\033') printf "${e}[$1\$p" >/dev/tty case $(dd count=1 2>/dev/null) in "${e}[$1;1\$y") echo on;; "${e}[$1;2\$y") echo off;; *) echo unknown;; esac )$ tput smam # printf '\033[?7h' $ decrqm '?7' on $ tput rmam # printf '\033[?7l' $ decrqm '?7' offA better approach would be to save that setting upon starting the script with \033[?7s and restore it upon exiting with \033[?7r: save_am(){ printf '\033[?7s'; } restore_am(){ printf '\033[?7r'; }save_am tput rmam .. restore_amBut many terminal emulators (notably screen and tmux) do not support those escapes. At least not by default. So all this is pure trivia -- it's not like you can use it for anything practical ;-)
How can one retrieve the status of terminal settings like smam and rmam? The reason is that I set rmam with:
tput rmam
in a script, then proceed to set smam on exit:
tput smam
But if the terminal has rmam set when the script starts, I do not want to set smam on exit. How can this be done?
Get terminal status for termcap capabilities
You're looking for the erase character(s) control sequence. tput cup 5 10 ; tput ech 59This is not to be confused with delete character(s). tput cup 5 10 ; tput dch 59Erasure overwrites with blanks. Deletion moves the rest of the line left.
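A hedged helper wrapping this up (erase_span is a hypothetical name; arguments are the row, the start column, and the end column, all zero-based, erasing up to but not including the end column to match the 59 above):
erase_span() {
    tput cup "$1" "$2"           # jump to the start of the span
    tput ech "$(( $3 - $2 ))"    # overwrite that many cells with blanks
}
erase_span 5 10 69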
I need to clear a string in the terminal between two positions (not the beginning or end of the line). For example, delete everything starting from tput cup 5 10 and up to tput cup 5 69. One line. Yes, it's possible to simply echo spaces over this whole region, but maybe there is a better way? Something like clear 10 69 on the current cursor line.
bash: clear string between two positions
sgr is short for "Select Graphic Rendition", which is also known as "Set Attribute" (see the terminfo(5) manual; man 5 terminfo). What you are most likely using is sgr0 though, which resets all set attributes.
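Note that the parameterised sgr capability itself takes nine attribute flags (standout, underline, reverse, blink, dim, bold, invisible, protect, altcharset), so bold alone could for example be requested with:
tput sgr 0 0 0 0 0 1 0 0 0   # only the sixth flag (bold) set
tput sgr0                    # reset everything again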
Does anybody know what sgr stands for when using tput? It is supposed to reset the colour scheme to its default colour.
Deciphering the meaning of the `sgr` variable in `tput`
The basic limitation is when turning bold off. Some terminals support the ECMA-48 control SGR 22 (neither bold/faint, does not affect color). However, there is no predefined capability for bold-off in terminfo or termcap (see the manual page), and there also is no distinction between turning bold off versus turning colors off. For portability, you would have to take that into account (turning colors back on if you turned bold off not intending to affect color).
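On terminals that do implement SGR 22, a hedged sketch of turning bold off while keeping the current color (this escape has no standard tput capability, hence the raw sequence):
tput setaf 1; tput bold
printf 'bold red'
printf '\033[22m'        # SGR 22: bold/faint off, color untouched
printf ' plain red\n'
tput sgr0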
Suppose I have the following color support in one of my portable shell scripts: #!/bin/shset -o nounsettput_init_linux () { set_fg_color='tput setaf'; reset_color=$(tput sgr0 2>/dev/null); } tput_init_bsd () { set_fg_color='tput AF'; reset_color=$(tput me 2>/dev/null); } tput_init_none () { set_fg_color=':'; reset_color=; }if tput setaf 1 >/dev/null 2>&1; then tput_init_linux || tput_init_none; elif tput AF 1 >/dev/null 2>&1; then tput_init_bsd || tput_init_none; else tput_init_none; fino_color () { printf '%s' "$reset_color"; }colorize () { #tput bold case "$1" in (red) $set_fg_color 1 ;; (green) $set_fg_color 2 ;; (yellow) $set_fg_color 3 ;; (blue) $set_fg_color 4 ;; (magenta) $set_fg_color 5 ;; (cyan) $set_fg_color 6 ;; (white) $set_fg_color 7 ;; (*) printf '%s\n' "[ERROR] This color ('$1') is not supported by the colorize() function. Quiting!" >&2; exit 1 ;; esac }print_ok () { colorize green; printf '%s' '[OK] '; no_color; } print_notice () { colorize cyan; printf '%s' '[NOTICE] '; no_color; } print_debug () { colorize yellow; printf '%s' '[DEBUG] ' >&2; no_color; } print_error () { colorize red; printf '%s' '[ERROR] ' >&2; no_color; }One rather silly example of use follows: grub_config_file=/boot/grub/grub.cfg readonly grub_config_fileif [ ! -f "$grub_config_file" ]; then print_error; printf '%s\n' "GRUB config file not found at $grub_config_file. Aborting!" >&2 exit 1 else print_ok; printf '%s\n' "GRUB config file was found at $grub_config_file. Searching for Windows..." fiNow, my question is about the bold text. Specifically, I am unsure if terminfo/termcap tput bold/tput md are portable or not, and if not what are the limitations of bold text? Thank you for your time.
terminfo/termcap `tput bold`/`tput md`: portability of bold text
Assuming I checked the correct pyenv source, I would recommend setting PYENV_VIRTUALENV_DISABLE_PROMPT=1 and then building your own custom PS1 without interference from pyenv. The activated environment should be available as PYENV_VIRTUAL_ENV (or check what is set after activate with env). https://github.com/pyenv/pyenv-virtualenv/blob/c85c1c9ed520d28cbb34e10e1fe82c6ee37130bc/bin/pyenv-sh-activate#L244 Also quite interesting: you can find quite a lot of questions/discussions around this variable and the "deprecation" of prompt modification, for example https://stackoverflow.com/questions/74611317/enable-pyenv-virtualenv-prompt-at-terminal This deprecation was recently removed and pyenv plans to keep the prompt modification (including this way to disable it). See https://github.com/pyenv/pyenv-virtualenv/commit/77a7644ff7a7a624ba4dfebe5c7441a57be3cc48. PS: assuming $() works the same in ksh as in bash, putting the tput commands themselves into PS1 instead of their output is probably unnecessarily bad for your shell's performance.
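A hedged sketch of what that custom PS1 could look like (assuming, as the question's prompt does, that your ksh expands parameters and the \u/\h/\w escapes in PS1; the ##*/ trims the environment path down to its last component):
export PYENV_VIRTUALENV_DISABLE_PROMPT=1
PS1='\n${PYENV_VIRTUAL_ENV:+(${PYENV_VIRTUAL_ENV##*/}) }[\u@\h] \w\n\$ '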
In OBSD ksh, I had a PS1 which prepended a blank line: PS1="\n[\u@\h] \w\n\$"The problem with this was that pyenv prepends the name of a virtual environment when that environment is activated. Here is the relevant bit of the pyenv activation script: PS1="(porcupine) ${PS1:-}"This eats the blank line, turning [myusername@myhostname] ~/some/path $ echo foo foo[myusername@myhostname] ~/some/path $into (virtual-environment-name) [myusername@myhostname] ~/some/path $ echo foo foo (virtual-environment-name) [myusername@myhostname] ~/some/path $This is annoying, so I tried to fix it with tput: PS1_TEXT="[\u@\h] \w\n\$ " PS1="$(tput sc il1 nw)$(tput rc DO 1)\n$PS1_TEXT"This moves the cursor to the row above the virtual environment name, makes it a blank line, then returns and sticks in my original PS1. This almost works, but when I hit the bottom of the terminal window, the virtual environment name vanishes. This is annoying, again.
Adding a newline above PS1 that survives prepending
No, tput won't do this (normally) because you're likely to use only the predefined/standard terminal capabilities, which don't have that combination. With ncurses, you could define your own terminal description with a user-defined capability, and tput would work with that. Something like this would combine colors: infocmp -x >foo printf '\tfgbg=\\E[3%p1%;4%p2%dm,\n' >>foo sudo tic -x fooThen (the example is for 8-colors) tput fgbg 4 0would set the foreground to blue and background to black. However, there are a lot of possibilities. Using the predefined capabilities involves less effort.
For example, if I want the prefix for bold green, I might do Green=$(tput bold; tput setaf 2). This would set $Green to "\E[1m\E[32m". But on an Ansi terminal, it could just as easily have been "\E[1;32m". Can this be done with tput, or am I asking too much?
Can tput be used with combined capabilities?
tput bold writes the character sequence that is to be used to tell the current terminal it is running in to start writing in bold. It knows the type of the terminal based on the value of the $TERM environment variable. That variable is set by terminal emulators or by getty. tput queries the termcap or terminfo databases to know what sequence to use for a given attribute for a given terminal type. For instance, when running in an xterm, where $TERM will be something like xterm or xterm-256color, tput bold will write \e[1m which is the sequence recognised by xterm (and most modern terminal emulators) to start writing in bold. When running in an hpterm, it will send \e&dB instead. When a script is running from cron, it is not running in a terminal. If you want it to send a sequence to enable the bold attribute, you need to tell it for what terminal that should be, by setting the $TERM environment variable. Maybe something like: export TERM="${TERM-xterm}" # set $TERM to xterm if not set printf '%s\n' "$(tput bold)start write to log$(tput sgr0)" >> /tmp/log.txtThen your /tmp/log.txt will contain the xterm sequence to turn bold on. When the content of the file is sent to a xterm terminal emulator, it will be displayed in bold, YMMV for other terminals.
I've come across very strange behavior. When we run this command on a Linux Red Hat machine
echo "$(tput bold)" start write to log "$(tput sgr0)" >> /tmp/log.txt
we get the expected bold text in /tmp/log.txt:
more /tmp/log.txt
start write to log <----- BOLD TEXT
but when we run it from a cron job under /etc/cron.d
*/1 * * * * root [[ -f /home/mng_log.sh ]] && echo "$(tput bold)" start write to log "$(tput sgr0)" >> /tmp/log.txt
then the text in /tmp/log.txt isn't bold. Why does the command not write the bold text when run from cron?
tput in a cron job does not output bolded text [duplicate]
No: s/smso/so/ (termcap-names are always 2 characters). The terminfo(5) manual page is the place to go for answering your question as stated. The manual page shows this: enter_bold_mode bold md turn on bold (extra bright) modeso that md would be used for turning on bold mode. smso (terminfo) or so (termcap) turns on standout mode, which you can see later in the manual page is not necessarily the same as bold:If your terminal has one or more kinds of display attributes, these can be represented in a number of different ways. You should choose one display form as standout mode, representing a good, high contrast, easy-on-the-eyes, format for highlighting error messages and other attention getters. (If you have a choice, reverse video plus half- bright is good, or reverse video alone.) The sequences to enter and exit standout mode are given as smso and rmso, respectively. If the code to change into or out of standout mode leaves one or even two blank spaces on the screen, as the TVI 912 and Teleray 1061 do, then xmc should be given to tell how many spaces are left.
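So the BSD-tput branch of the script in the question could plausibly be corrected as follows (a sketch using the 2-character termcap names from the manual page):
tput_bold=$(tput md)   # enter_bold_mode, termcap name md
tput_reset=$(tput me)  # exit_attribute_mode, termcap name me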
Can anyone on *BSD with GUI please confirm, that the below works on a BSD system correctly or not? I was trying to create a virtual *BSD test box, but ended up unsuccessful. So, I drained some available resources for correct tput sequences on *BSD... Thank you!#!/bin/shif tput setaf > /dev/null 2>&1; then # Linux tput tput_number_of_colors=$(tput colors) tput_cmd_set_fg_color='tput setaf' tput_bold=$(tput bold) tput_reset=$(tput sgr0) elif tput AF > /dev/null 2>&1; then # BSD tput tput_number_of_colors=$(tput Co) tput_cmd_set_fg_color='tput AF' tput_bold=$(tput smso) tput_reset=$(tput me) else # black & white console, or no tput tput_number_of_colors=2 tput_cmd_set_fg_color= tput_bold= tput_reset= fitput_test () # this function uses the above to test tput capability of the terminal { { command -v tput && [ "$tput_number_of_colors" -ge 8 ] && $tput_cmd_set_fg_color 1 && echo "$tput_bold"; } > /dev/null 2>&1 }if tput_test; then # example of bold color definition color_red=$tput_bold$($tput_cmd_set_fg_color $color_red_id) fi
BSD (& Linux) tput - basic interoperable color setup
As ckhan mentioned, jstack is great because it gives the full stack trace of all active threads in the JVM. The same can be obtained on stderr of the JVM using SIGQUIT. Another useful tool is jmap which can grab a heap dump from the JVM process using the PID of the process: jmap -dump:file=/tmp/heap.hprof $PIDThis heap dump can be loaded in tools like visualvm (which is now part of the standard Oracle java sdk install, named jvisualvm). In addition, VisualVM can connect to the running JVM and display information about the JVM, including showing graphs of internal CPU usage, thread counts, and heap usage - great for tracking down leaks. Another tool, jstat, can collect garbage collection statistics for the JVM over a period of time much like vmstat when run with a numeric argument (e.g. vmstat 3). Finally, it is possible to use a Java Agent to push instrumentation on all methods of all objects at load-time. The library javassist can help to make this very easy to do. So, it is feasible to add your own tracing. The hard part with that would be finding a way to get trace output only when you wanted it and not all the time, which would likely slow the JVM to a crawl. There's a program called dtrace that works in a manner like this. I've tried it, but was not very successful. Note that agents cannot instrument all classes because the ones needed to bootstrap the JVM are loaded before the agent can instrument, and then it's too late to add instrumentation to those classes. My Suggestion - start with VisualVM and see if that tells you what you need to know since it can show the current threads and important stats for the JVM.
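For example, typical invocations of the tools mentioned above (replace $PID with the JVM's process id):
jstack $PID > /tmp/threads.txt        # full thread dump of all threads
jmap -dump:file=/tmp/heap.hprof $PID  # heap dump, loadable in VisualVM
jstat -gcutil $PID 3000               # GC utilisation every 3 seconds, vmstat-style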
As a sysadmin I sometimes face situations, where a program behaves abnormally, while not creating errors at all or creating nonsense error-messages. In the past - before java came in - there were two counter-measures:If nothing else helps - RTFM ;-) If even 1. does not help - trace the system-calls and see what is happeningI usually use strace -f for this task with Linux (other OS have similar trace-tools). Now while this usually works well for any old-fashioned program, the trace gets very fuzzy when doing the same on a java-process. There are so many system-calls seemingly unrelated to any real action, that it is terrible to search through such a dump. Are there better ways to do that (if the source-code is not available)?
How to trace a java-program?
On Linux, assuming you want to know what is writing to the same resource as your shell's stdout is connected to, you could do: strace -fe write $(lsof -t "/proc/$$/fd/1" | sed 's/^/-p/')That would report the write() system calls (on any file descriptor) of every process that have at least one file descriptor open on the same file as fd 1 of your shell.
I have two instances of a process running. One of them is "frEAkIng oUT!" and printing errors non stop to STDOUT. I want to kill the broken process but I have to make sure I don't terminate the wrong one. They were both started about at the same time and using top I can see they both use about the same amount of memory and CPU. I can't seem to find anything that points to which process is behaving badly. The safest thing would be to figure out which process/pid is writing to STDOUT. Is there any way to do that?
How to find out what process is writing to STDOUT?
Use env to ignore the readonly flags. env SHELLOPTS=xtrace ./yourscriptAlternately, if you have set -x in your parent shell, you can export SHELLOPTS to cause it to be copied into the environment, and thus to the child (thanks to Joe Casadonte for this): set -x # set xtrace flag export SHELLOPTS # copy all options, including xtrace flag, to environment ./yourscript
Bash has a sometimes-useful feature whereby if you turn on the "-x" option (I believe the symbolic name is xtrace), Bash outputs each line of script as it executes it. I know of two ways to enable this behavior:In the script itself, say set -x On the command line, pass the -x option to Bash.Is there any way of turning this option on via environment variables? (In particular, I'm not invoking Bash myself, so I can't pass any options to it, and the script of interest is inside a compressed archive which I don't really feel like rebuilding. If I could set an environment variable, it would presumably be inherited by all child processes...)The manpage says something about BASHOPTS, but when I try it Bash says that's read-only. (Thanks for not mentioning that in the manpage.) Similarly, SHELLOPTS also seems to be read-only. You can select which FD is used with BASH_XTRACEFD. But I still need to turn tracing on in the first place.
Turn on xtrace with environment variable
ltrace -- a library call tracer. It only works on Linux and on a small subset of architectures. Calls to dlopen()ed libraries will not be traced. Further pointers can be found in the man page and in /etc/ltrace.conf.
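A hedged usage sketch for the SSLeay case described in the question (the binary name is hypothetical):
ltrace -f -o /tmp/calls.log ./third-party-binary   # -f follows forks, -o writes the trace to a file
grep -i ssl /tmp/calls.log | less                  # then pick out the SSL-related calls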
I know of strace, which is super handy for watching system calls. Is there an equivalent that can trace calls to external libraries? For example, I'm trying to debug some SSLeay misbehavior with a third-party binary that I cannot access the code for (and whose developer is unresponsive). I believe I have nailed down the problem, but I can't test this without seeing exactly how the data being signature-checked is being formatted for sending off to SSLeay.
strace for library calls
You've asked way too much in one question—well, technically not, as I guess "is this understanding correct" can be answered quickly: no. But that's not a useful answer. First, you're right about ata_piix and sd_mod apparently being compiled-in to your kernel. That's a choice you make configuring the kernel—you can omit it, include it, or include it as a module. (Same with ext4). Second, you have assumed writes to be far simpler than they actually are. The basic outline of how a write works is that the filesystem code puts the data to be written in memory, as part of the buffer-cache, and marks it as needs-to-be-written ("dirty"). (Unless there is already too much of that in RAM, in which case it actually is forced to do the write...) Later, various things (such as the bdflush kernel thread) actually flush the dirty pages to disk. This is when you'd see calls through sd, scsi, libata, ata_piix, io schedulers, PCI, etc. While there is very likely DMA involved in that write-out, it's of the data to be transferred, and maybe the command. But disk writes, at least in SATA, are handled by sending commands which basically mean "write sector X with data Y". But it's definitely not handled by memory-mapping the entire disk (consider: you can use disks far larger than 4GiB on 32-bit machines). Caching is handled by the memory management subsystem (not a driver), in conjunction with the filesystem, block layer, etc. tmpfs is special, it is basically entirely cache. Its just special cache that is never discarded or written back (though it can be swapped out). You can find the code in mm/shmem.c and several other places (try ack-grep --cc CONFIG_TMPFS to find them). Basically, writing to disk goes through a good portion of the kernel's subsystems; networking is the only major one I can think of that isn't involved in your example. Properly explaining it requires a book-length effort; I recommend looking for one.
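You can watch the dirty-page mechanism described above from the shell; a minimal sketch:
grep -E '^(Dirty|Writeback):' /proc/meminfo   # data marked needs-to-be-written
sync                                          # force the flush to disk
grep -E '^(Dirty|Writeback):' /proc/meminfo   # Dirty should have dropped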
Apologies in advance if this post is a bit dense/messy, but I'm having a hard time formulating it better... Basically, I would like to study what happens upon a hard disk write, and I'd like to know:Is my understanding below correct - and if not, where am I going wrong? Is there a better tool to "capture" log data, about all aspects happening on the PC, during a disk write?In more detail - first, the OS I'm using is: $ uname -a Linux mypc 2.6.38-16-generic #67-Ubuntu SMP Thu Sep 6 18:00:43 UTC 2012 i686 i686 i386 GNU/LinuxSo, I have the following simple (e.g. the usual checks for failure of operations are skipped) user-space C program, wtest.c: #include <stdio.h> #include <fcntl.h> // O_CREAT, O_WRONLY, S_IRUSRint main(void) { char filename[] = "/tmp/wtest.txt"; char buffer[] = "abcd"; int fd; mode_t perms = S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP|S_IROTH|S_IWOTH; fd = open(filename, O_RDWR|O_CREAT, perms); write(fd,buffer,4); close(fd); return 0; }I build this with gcc -g -O0 -o wtest wtest.c. Now, since I'm trying to write to /tmp, I note that it is a directory under the root / - so I check mount: $ mount /dev/sda5 on / type ext4 (rw,errors=remount-ro,commit=0) ... /dev/sda6 on /media/disk1 type ext4 (rw,uhelper=hal,commit=0) /dev/sda7 on /media/disk2 type ext3 (rw,nosuid,nodev,uhelper=udisks,commit=0,commit=0,commit=0,commit=0,commit=0,commit=0) ...So, my root filesystem / is one partition of the /dev/sda device (and I'm using other partitions as "standalone" disks/mounts, too). To find the driver for this device, I use hwinfo: $ hwinfo --disk ... 19: IDE 00.0: 10600 Disk ... SysFS ID: /class/block/sda SysFS BusID: 0:0:0:0 ... Hardware Class: disk Model: "FUJITSU MHY225RB" ... Driver: "ata_piix", "sd" Driver Modules: "ata_piix" Device File: /dev/sda ... Device Number: block 8:0-8:15 ...So, the /dev/sda hard disk is apparently handled by ata_piix (and sd) driver. $ grep 'ata_piix\| sd' <(gunzip </var/log/syslog.2.gz) Jan 20 09:28:31 mypc kernel: [ 1.963846] ata_piix 0000:00:1f.2: version 2.13 Jan 20 09:28:31 mypc kernel: [ 1.963901] ata_piix 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19 Jan 20 09:28:31 mypc kernel: [ 1.963912] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ] Jan 20 09:28:31 mypc kernel: [ 2.116038] ata_piix 0000:00:1f.2: setting latency timer to 64 Jan 20 09:28:31 mypc kernel: [ 2.116817] scsi0 : ata_piix Jan 20 09:28:31 mypc kernel: [ 2.117068] scsi1 : ata_piix Jan 20 09:28:31 mypc kernel: [ 2.529065] sd 0:0:0:0: [sda] 488397168 512-byte logical blocks: (250 GB/232 GiB) Jan 20 09:28:31 mypc kernel: [ 2.529104] sd 0:0:0:0: Attached scsi generic sg0 type 0 Jan 20 09:28:31 mypc kernel: [ 2.529309] sd 0:0:0:0: [sda] Write Protect is off Jan 20 09:28:31 mypc kernel: [ 2.529319] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 20 09:28:31 mypc kernel: [ 2.529423] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 20 09:28:31 mypc kernel: [ 2.674783] sda: sda1 sda2 < sda5 sda6 sda7 sda8 sda9 sda10 > Jan 20 09:28:31 mypc kernel: [ 2.676075] sd 0:0:0:0: [sda] Attached SCSI disk Jan 20 09:28:31 mypc kernel: [ 4.145312] sd 2:0:0:0: Attached scsi generic sg1 type 0 Jan 20 09:28:31 mypc kernel: [ 4.150596] sd 2:0:0:0: [sdb] Attached SCSI removable diskI have to pull from older syslog as I suspend a lot, but the above seems like the proper snippet from the syslog at boot time, where the ata_piix (and sd) driver kicks in for the first time. 
My first point of confusion is that I cannot otherwise observe the ata_piix or sd drivers: $ lsmod | grep 'ata_piix\| sd' $ $ modinfo sd ERROR: modinfo: could not find module sd $ modinfo ata_piix ERROR: modinfo: could not find module ata_piixSo my first question is - why cannot I observe the ata_piix module here, only in boot-time logs? Is it because ata_piix (and sd) are built as built-in drivers in the (monolithic) kernel, as opposed to being built as (loadable) .ko kernel modules? Right - so now, I'm trying to observe what happens upon running the program with ftrace Linux built-in function tracer. sudo bash -c ' KDBGPATH="/sys/kernel/debug/tracing" echo function_graph > $KDBGPATH/current_tracer echo funcgraph-abstime > $KDBGPATH/trace_options echo funcgraph-proc > $KDBGPATH/trace_options echo 0 > $KDBGPATH/tracing_on echo > $KDBGPATH/trace echo 1 > $KDBGPATH/tracing_on ; ./wtest ; echo 0 > $KDBGPATH/tracing_on cat $KDBGPATH/trace > wtest.ftrace '... and here is a snippet of the ftrace log concerning the write: 4604.352690 | 0) wtest-31632 | | sys_write() { 4604.352690 | 0) wtest-31632 | 0.750 us | fget_light(); 4604.352692 | 0) wtest-31632 | | vfs_write() { 4604.352693 | 0) wtest-31632 | | rw_verify_area() { 4604.352693 | 0) wtest-31632 | | security_file_permission() { 4604.352694 | 0) wtest-31632 | | apparmor_file_permission() { 4604.352695 | 0) wtest-31632 | 0.811 us | common_file_perm(); 4604.352696 | 0) wtest-31632 | 2.198 us | } 4604.352697 | 0) wtest-31632 | 3.573 us | } 4604.352697 | 0) wtest-31632 | 4.979 us | } 4604.352698 | 0) wtest-31632 | | do_sync_write() { 4604.352699 | 0) wtest-31632 | | ext4_file_write() { 4604.352700 | 0) wtest-31632 | | generic_file_aio_write() { 4604.352701 | 0) wtest-31632 | | mutex_lock() { 4604.352701 | 0) wtest-31632 | 0.666 us | _cond_resched(); 4604.352703 | 0) wtest-31632 | 1.994 us | } 4604.352704 | 0) wtest-31632 | | __generic_file_aio_write() { ... 4604.352728 | 0) wtest-31632 | | file_update_time() { ... 4604.352732 | 0) wtest-31632 | 0.756 us | mnt_want_write_file(); 4604.352734 | 0) wtest-31632 | | __mark_inode_dirty() { ... 4604.352750 | 0) wtest-31632 | | ext4_mark_inode_dirty() { 4604.352750 | 0) wtest-31632 | 0.679 us | _cond_resched(); 4604.352752 | 0) wtest-31632 | | ext4_reserve_inode_write() { ... 4604.352777 | 0) wtest-31632 | | __ext4_journal_get_write_access() { ... 4604.352795 | 0) wtest-31632 | | ext4_mark_iloc_dirty() { ... 4604.352806 | 0) wtest-31632 | | __ext4_journal_stop() { ... 4604.352821 | 0) wtest-31632 | 0.684 us | mnt_drop_write(); 4604.352822 | 0) wtest-31632 | + 93.541 us | } 4604.352823 | 0) wtest-31632 | | generic_file_buffered_write() { 4604.352824 | 0) wtest-31632 | 0.654 us | iov_iter_advance(); 4604.352825 | 0) wtest-31632 | | generic_perform_write() { 4604.352826 | 0) wtest-31632 | 0.709 us | iov_iter_fault_in_readable(); 4604.352828 | 0) wtest-31632 | | ext4_da_write_begin() { 4604.352829 | 0) wtest-31632 | | ext4_journal_start_sb() { ... 4604.352847 | 0) wtest-31632 | 1.453 us | __block_write_begin(); 4604.352849 | 0) wtest-31632 | + 21.128 us | } 4604.352849 | 0) wtest-31632 | | iov_iter_copy_from_user_atomic() { 4604.352850 | 0) wtest-31632 | | __kmap_atomic() { ... 4604.352863 | 0) wtest-31632 | 0.672 us | mark_page_accessed(); 4604.352864 | 0) wtest-31632 | | ext4_da_write_end() { 4604.352865 | 0) wtest-31632 | | generic_write_end() { 4604.352866 | 0) wtest-31632 | | block_write_end() { ... 4604.352893 | 0) wtest-31632 | | __ext4_journal_stop() { ... 
4604.352909 | 0) wtest-31632 | 0.655 us | mutex_unlock(); 4604.352911 | 0) wtest-31632 | 0.727 us | generic_write_sync(); 4604.352912 | 0) wtest-31632 | ! 212.259 us | } 4604.352913 | 0) wtest-31632 | ! 213.845 us | } 4604.352914 | 0) wtest-31632 | ! 215.286 us | } 4604.352914 | 0) wtest-31632 | 0.685 us | __fsnotify_parent(); 4604.352916 | 0) wtest-31632 | | fsnotify() { 4604.352916 | 0) wtest-31632 | 0.907 us | __srcu_read_lock(); 4604.352918 | 0) wtest-31632 | 0.685 us | __srcu_read_unlock(); 4604.352920 | 0) wtest-31632 | 3.958 us | } 4604.352920 | 0) wtest-31632 | ! 228.409 us | } 4604.352921 | 0) wtest-31632 | ! 231.334 us | }This is my second point of confusion - I can observe the user-space write() resulted with a kernel-space sys_write(), as expected; and within the sys_write(), I observe security-related functions (e.g. apparmor_file_permission()), "generic" write functions (e.g. generic_file_aio_write()), ext4 filesystem related functions (e.g. ext4_journal_start_sb()) - but I do not observe anything related to ata_piix (or sd) drivers ?! The page Tracing and Profiling - Yocto Project suggests using the blk tracer in ftrace to get more information about block device operation, but it reports nothing for me with this example. Also, Linux Filesystem Drivers - Annon Inglorion (tutorfs) suggests that filesystems are (can?) also (be) implemented as kernel modules/drivers, and I'm guessing that is the case for ext4 as well. Finally, I could have sworn that I have earlier observed the driver name in square brackets next to the function shown by the function_graph tracer, but I guess I had mixed things up - it can probably appear like that in stack (back)traces, but not in the function graph. Furthermore, I can inspect /proc/kallsyms: $ grep 'piix\| sd\|psmouse' /proc/kallsyms ... 00000000 d sd_ctl_dir 00000000 d sd_ctl_root 00000000 d sdev_class 00000000 d sdev_attr_queue_depth_rw 00000000 d sdev_attr_queue_ramp_up_period 00000000 d sdev_attr_queue_type_rw 00000000 d sd_disk_class ... 00000000 t piix_init_sata_map 00000000 t piix_init_sidpr 00000000 t piix_init_one 00000000 t pci_fixup_piix4_acpi ... 00000000 t psmouse_show_int_attr [psmouse] 00000000 t psmouse_protocol_by_type [psmouse] 00000000 r psmouse_protocols [psmouse] 00000000 t psmouse_get_maxproto [psmouse] ...... and checking with source Linux/drivers/ata/ata_piix.c, confirm that e.g. piix_init_sata_map is indeed a function in ata_piix. Which should probably tell me that: modules that are compiled in the kernel (so they become a part of the monolithic kernel) "lose" the information about which module they come from; however, the loadable modules which are built as separate .ko kernel objects, preserve that information (e.g. [psmouse] shown above in square brackets). Thus, also ftrace could only show "originating module" information, only for those functions coming from loadable kernel modules. Is this correct? The above taken into consideration, this is the understanding that I have of the process currently:At boot time, the ata_piix driver establishes a DMA (?) 
memory mapping between /dev/sda and the hard disk; because of this, all future accesses to /dev/sda via ata_piix will be transparent to the kernel (that is, not traceable) - since all the kernel would see are just reads/writes to memory locations (not necessarily calls to specific traceable kernel functions), which are not reported by the function_graph tracer. At boot time, the sd driver will furthermore "parse" the partitions of /dev/sda, make them available, and possibly handle the memory mappings between partitions <-> disk device; again, this should make the access operations via sd transparent to the kernel. Since both ata_piix and sd are compiled in-kernel, even if some of their functions do end up being captured by ftrace, we cannot get information about which module those functions would come from (apart from "manual" correlation with source files). Later on, mount establishes a relationship/binding between a partition and the corresponding filesystem driver (in this case ext4); from this point on, all of the accesses to the mounted filesystem would be handled by ext4 functions - which are traceable by the kernel; but as ext4 is compiled in-kernel, the tracer cannot give us the originating module information. So, the observed "generic" writes, called via ext4 functions, would ultimately access memory locations whose mapping is established by ata_piix - but other than that, ata_piix doesn't interfere directly with data transfers (that being probably handled by DMA, outside of the processor(s), and thus transparent to it). Is this understanding correct? Some related subquestions: In my setup above, I can identify a PCI device driver (ata_piix) and a filesystem driver (ext4); but are there character or block drivers used somewhere on the "write" execution path, and if so, which are they? Which of those drivers would handle caching (so unnecessary disk operations are skipped or optimized)? I know from before that /dev/shm is a filesystem in RAM; mount | grep shm for me reports: none on /dev/shm type tmpfs (rw,nosuid,nodev). Does that mean that - in contrast to /dev/sda - the shm filesystem simply lacks the (DMA) mapping from "its own" addresses to bus addresses towards a device, and thus all accesses via the tmpfs filesystem driver end up in actual RAM?
Observing a hard-disk write in kernel space (with drivers/modules)
The fork() and vfork() wrappers in glibc are implemented via the clone() system call. To better understand the relationship between fork() and clone(), we must consider the relationship between processes and threads in Linux.

Traditionally, fork() would duplicate all the resources owned by the parent process and assign the copy to the child process. This approach incurs considerable overhead, which all might be for nothing if the child immediately calls exec(). In Linux, fork() utilizes copy-on-write pages to delay or altogether avoid copying the data that can be shared between the parent and child processes. Thus, the only overhead that is incurred during a normal fork() is the copying of the parent's page tables and the assignment of a unique process descriptor struct, task_struct, for the child.

Linux also takes an exceptional approach to threads. In Linux, threads are merely ordinary processes which happen to share some resources with other processes. This is a radically different approach to threads compared to other operating systems such as Windows or Solaris, where processes and threads are entirely different kinds of beasts. In Linux, each thread has an ordinary task_struct of its own that just happens to be set up in such a way that it shares certain resources, such as an address space, with the parent process.

The flags parameter of the clone() system call includes a set of flags which indicate which resources, if any, the parent and child processes should share. Processes and threads are both created via clone(); the only difference is the set of flags that is passed to clone(). A normal fork() could be implemented as:

clone(SIGCHLD, 0);

This creates a task which does not share any resources with its parent, and is set to send the SIGCHLD termination signal to the parent when it exits. In contrast, a task which shares the address space, filesystem resources, file descriptors and signal handlers with the parent - in other words, a thread - could be created with:

clone(CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, 0);

vfork() in turn is implemented via a separate CLONE_VFORK flag, which will cause the parent process to sleep until the child process wakes it via a signal. The child will be the sole thread of execution in the parent's address space, until it calls exec() or exits. The child is not allowed to write to that memory. The corresponding clone() call could be as follows:

clone(CLONE_VFORK | CLONE_VM | SIGCHLD, 0);

The implementation of sys_clone() is architecture specific, but the bulk of the work happens in kernel_clone() defined in kernel/fork.c. This function calls the static copy_process(), which creates a new process as a copy of the parent, but does not start it yet. copy_process() copies the registers, assigns a PID to the new task, and either duplicates or shares appropriate parts of the process environment as specified by the clone flags. When copy_process() returns, kernel_clone() will wake the newly created process and schedule it to run.

References

kernel/fork.c in Linux v5.19-rc5, 2022-07-03. See line 2606 for kernel_clone(), and line 2727 onward for the definitions of the syscalls fork(), vfork(), clone(), and clone3(), which all more or less just wrap kernel_clone().
When ltrace is used for tracing the system calls, I could see that fork() uses sys_clone() rather than sys_fork(). But I couldn't find the Linux source where it is defined. My program is:

#include <stdio.h>

main()
{
    int pid,i=0,j=0;
    pid=fork();
    if(pid==0)
        printf("\nI am child\n");
    else
        printf("\nI am parent\n");
}

And the ltrace output is:

SYS_brk(NULL) = 0x019d0000
SYS_access("/etc/ld.so.nohwcap", 00) = -2
SYS_mmap(0, 8192, 3, 34, 0xffffffff) = 0x7fe3cf84f000
SYS_access("/etc/ld.so.preload", 04) = -2
SYS_open("/etc/ld.so.cache", 0, 01) = 3
SYS_fstat(3, 0x7fff47007890) = 0
SYS_mmap(0, 103967, 1, 2, 3) = 0x7fe3cf835000
SYS_close(3) = 0
SYS_access("/etc/ld.so.nohwcap", 00) = -2
SYS_open("/lib/x86_64-linux-gnu/libc.so.6", 0, 00) = 3
SYS_read(3, "\177ELF\002\001\001", 832) = 832
SYS_fstat(3, 0x7fff470078e0) = 0
SYS_mmap(0, 0x389858, 5, 2050, 3) = 0x7fe3cf2a8000
SYS_mprotect(0x7fe3cf428000, 2097152, 0) = 0
SYS_mmap(0x7fe3cf628000, 20480, 3, 2066, 3) = 0x7fe3cf628000
SYS_mmap(0x7fe3cf62d000, 18520, 3, 50, 0xffffffff) = 0x7fe3cf62d000
SYS_close(3) = 0
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7fe3cf834000
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7fe3cf833000
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7fe3cf832000
SYS_arch_prctl(4098, 0x7fe3cf833700, 0x7fe3cf832000, 34, 0xffffffff) = 0
SYS_mprotect(0x7fe3cf628000, 16384, 1) = 0
SYS_mprotect(0x7fe3cf851000, 4096, 1) = 0
SYS_munmap(0x7fe3cf835000, 103967) = 0
__libc_start_main(0x40054c, 1, 0x7fff47008298, 0x4005a0, 0x400590 <unfinished ...>
fork( <unfinished ...>
SYS_clone(0x1200011, 0, 0, 0x7fe3cf8339d0, 0) = 5967
<... fork resumed> ) = 5967
puts("\nI am parent" <unfinished ...>
SYS_fstat(1, 0x7fff47008060) = 0
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7fe3cf84e000
I am child
SYS_write(1, "\n", 1) = 1
SYS_write(1, "I am parent\n", 12) = -512
--- SIGCHLD (Child exited) ---
SYS_write(1, "I am parent\n", 12I am parent
) = 12
<... puts resumed> ) = 13
SYS_exit_group(13 <no return ...>
+++ exited (status 13) +++
Which file in kernel specifies fork(), vfork()... to use sys_clone() system call
DTrace is able to report on vfs information in FreeBSD (as well as a raft of other probes). DTrace is enabled by default in the 10 kernel, so all you need to do is load the module and then run the dtrace script.

Load the DTrace module:

kldload dtraceall

Get the vfssnoop.d script from the FreeBSD forums. The whole thread is a treasure trove for disk monitoring. Run it:

./vfssnoop.d

Watch the output for what is accessed:

# ./vfssnoop.d
cc1: warning: is shorter than expected
TIMESTAMP UID PID PROCESS CALL SIZE PATH/FILE
1555479476691083 0 1225 nfsd vop_getattr - /share/netboot
1555479478601010 0 1225 nfsd vop_inactive - /share/netboot
1555479482457241 0 1225 nfsd vop_getattr - /share/wpad.dat
1555480557262388 0 1432 cron vop_getattr - /var/cron/tabs
1555480557302178 0 1432 cron vop_inactive - /var/cron/tabs
1555480557336414 0 1432 cron vop_inactive - /etc
1555480557346224 0 1432 cron vop_getattr - /etc/crontab
Is there any way to find a process which is periodically writing to disk (according to the HDD LED) on FreeBSD 10 with ZFS (maybe by turning on some verbose logging mode in ZFS)? lsof and other instantly-aggregating statistics utilities seem unable to catch anything, because each disk access only lasts a moment.
FreeBSD 10 trace disk activity
When instructed to echo commands as they are executed ("execution trace"), both bash and ksh add single quotes around any word with one of several types of meta-characters1. The meta-characters could have gotten into the word in a variety of ways. The word (or part of it) could have been quoted with single or double quotes, the characters could have been escaped with a \, or they remained as the result of a failed filename matching attempt. In all cases, the execution trace will contain single-quoted words, for example:

$ set -x
$ echo foo\;bar
+ echo 'foo;bar'

This is just an artifact of the way the shells implement the execution trace; it doesn't alter the way the arguments are ultimately passed to the command. The quotes are added, printed, and discarded. Here is the relevant part of the bash source code, print_cmd.c:

/* A function to print the words of a simple command when set -x is on. */
void
xtrace_print_word_list (list, xtflags)
...
{
  ...
  for (w = list; w; w = w->next)
    {
      t = w->word->word;
      ...
      else if (sh_contains_shell_metas (t))
        {
          x = sh_single_quote (t);
          fprintf (xtrace_fp, "%s%s", x, w->next ? " " : "");
          free (x);
        }

As to why the authors chose to do this, the code there doesn't say. But here's some similar code in variables.c, and it comes with a comment:

/* Print the value cell of VAR, a shell variable.  Do not print
   the name, nor leading/trailing newline.  If QUOTE is non-zero,
   and the value contains shell metacharacters, quote the value
   in such a way that it can be read back in. */
void
print_var_value (var, quote)
...
{
  ...
  else if (quote && sh_contains_shell_metas (value_cell (var)))
    {
      t = sh_single_quote (value_cell (var));
      printf ("%s", t);
      free (t);
    }

So possibly it's done so that it's easier to copy the command lines from the output of the execution trace and run them again.

1The characters that would be quoted are: IFS whitespace (space, tab, newline), quoting chars (single and double quotes, backslash), shell meta-characters (|, &, ;, (, ), <, >), reserved words (!, {, }), globbing characters (*, [, ?, ], ^), expansion characters ($, backtick), possibly tilde, possibly #. See towards the end of lib/sh/shquote.c.
I was exploring the tracing of commands using set -x (set +x to unset) in bash:Print a trace of simple commands, for commands, case commands, select commands, and arithmetic for commands and their arguments or associated word lists after they are expanded and before they are executed. The value of the PS4 variable is expanded and the resultant value is printed before the command and its expanded arguments.Now consider the following, tracing the use of the the bash builtin echo \[-neE\] \[arg …\] command with and without quotes: # set -x # what I typed # echo 'love' # ... + echo love <--(1) the trace love # the output# echo love? # note the input contains no quote whatsoever + echo 'love?' <--(2) note the trace contains quotes after returning word love? # i.e. failed to find any file # echo 'love?' # note the input contains single quotes + echo 'love?' <--(3) traces like (2) love?# touch loveu # we create a file that matches the love? pattern + touch loveu# echo love? # of course now, the pattern matches the created file now + echo loveu <--(4) indeed it finds it and expands to name loveu # the name is echoedSo ? is indeed interpreted in this case as a special character used for pattern matching one single character in pathname expansion. Sure enough, once a file matching the pattern was created in the current directory, the match occurred and the name of the file was printed. Of course this behavior is documented:If no matching file names are found, and the shell option nullglob is disabled, the word is left unchanged.But the thing is that the word in (2) is unquoted love? not 'love?'. The trace shows the state before command execution but after expansion, and as we're seeing there is pathname expansion because of ? and there were no matches in the first case(2) we used the special character. So the single quotes appear in that case, just as when we use single quotes(3) ourselves with the same string? Whereas in the other cases there was either a literal or the match was found and accordingly "replaced" the pattern in the command. This seems to be what is meant in the manual section on quote removal right after expansion:After the preceding expansions, all unquoted occurrences of the characters ‘\’, ‘'’, and ‘"’ that did not result from one of the above expansions are removed. (my italics)So here(2) we have unquoted occurrences of ' which result from the prior expansion. I did not put them there; bash did, and now they're not removed - and we're just before the execution of the command.Similar illustration with for Consider this list used in a for name [ [in [words …] ] ; ] do commands; done loop1 , with no matching file: # for i in love love? 'love?'; do echo $i; done + for i in love 'love?' ''\''love?'\''' + echo love love + for i in love 'love?' ''\''love?'\''' + echo 'love?' love? + for i in love 'love?' ''\''love?'\''' + echo 'love?' love?So the echo command behavior is quite the same but in the case of the items in the for construct, it seems like it's trying to... escape itself quoting my quotes?? I'm uncertain...QuestionsWhy is an unquoted failed pathname expansion pattern denoted with single quotes in the context(2); expansion is completed anyway and we're going to execute? Again, we've completed expansion already and the pattern failed - nothing should have to expand anymore. I guess what I'm asking is why do we care at this point - the point we're at is just before 3.7.2-4 in the bash manual. 
Why isn't this left "as is", with expansion simply turned off for command execution, i.e. something like set -f? (What is the for loop doing with my single-quoted item in the list?)

1. When using such a word list construct with for, it's really a list of items, and the values are really just for convenience, as I find t="0"; for i in 0 0 0 0; do let t++; echo "yes, this is really $t times"; done quite convincing.
Why does bash add single quotes to unquoted failed pathname expansions in a command before executing it?
Using gcov:

$ gcc -O0 --coverage square.c
$ ./a.out
$ gcov -i square.c
$ awk -F '[,:]' '$1 == "function" && $3 > 0 {print $3, $4}' square.c.gcov
1 square
1 main

(where the number is the number of times the function was called (we skip the ones that are never called with $3 > 0 in the awk part)). That's typically used for code coverage (how much of the code is being tested). You could also use the gprof code profiling tool (typically used to figure out how much time is spent in various areas of the code):

$ gcc -O0 -pg square.c
$ ./a.out
$ gprof -b -P

Call graph

granularity: each sample hit covers 2 byte(s) no time propagated

index % time    self  children    called     name
                0.00    0.00       1/1         main [7]
[1]      0.0    0.00    0.00       1       square [1]
-----------------------------------------------

Index by function name

[1] square
I am looking for a tool on a Unix/Linux platform which can achieve the following:

I have the source files and I compiled the application myself (the source code is in C, although I don't really think it matters here)
I want to run this application while every function call is printed/logged to stdout/a file

For example:

#include <stdio.h>

int square(int x) {
    return x*x;
}

int main(void) {
    square(2);
}

And when I run this program it will print out

main
square

I understand that gdb can do this to some extent, and so can valgrind, but they do not do exactly what I want. I am just wondering if such a tool exists? Thanks.
Tool to print out functions being called during run time?
The -x flag isn't strictly "verbose", it's:

The shell shall write to standard error a trace for each command after it expands the command and before it executes it.

++ means this line of trace is coming from the shell's own internal processing while it thinks about your prompt. It's probably something that happens in your PROMPT_COMMAND: in that case, if you run:

PROMPT_COMMAND=
set -x

then you shouldn't get any more extra output. It's possible you have other configuration causing it as well - Bash has a lot of prompt setup - and in that case bash -norc should avoid it entirely.

That said, this is essentially intended behaviour: -x is really meant for debugging shell scripts, rather than use in an interactive shell. It really is meant to print out every command that it runs, and that's what it's doing here - there's an extra command that runs with every prompt printed.
Using set -x causes my shell to print unrelated stuff. For example:

$ set -x
++ printf $'%157s\r'
$ echo 'hello'
+ echo hello
hello
++ printf $'%157s\r'

The lines starting with '++' are extra. How can I remove those extra lines?
set verbose prints unrelated extra lines
There seems to be nothing wrong. The process you are looking at (ssh) simply has nothing to do at the moment you are taking its process stat. As long as there is no output from the remotely started command, the "select" blocks, and the process is sent to sleep.
When I run a command like iostat -dkx 2 2 via ssh, I get the expected result, but the processes on the local computer are staying alive in the status "interruptible sleep". Why is this happening? Is there a way to find out the reason for that behavior?

full command:

$ ssh -o ConnectTimeout=4 -o ChallengeResponseAuthentication=no -o PasswordAuthentication=no <user>@host> iostat -dkx 2 2

ps output:

$ ps aux | grep 11893 && ps aux | grep PID
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1000 10273 0.0 0.0 103280 904 pts/0 S+ 12:09 0:00 grep PID
1000 11893 0.0 0.0 158732 3892 ? S Feb17 0:00 ssh -o ConnectTimeout=4 -o ChallengeResponseAuthentication=no -o PasswordAuthentication=no <user>@<host> iostat -dkx 2 2
1000 10285 0.0 0.0 103280 904 pts/0 S+ 12:09 0:00 grep 11893

strace:

$ strace -p 11893
Process 11893 attached - interrupt to quit
select(8, [5], [], NULL, NULL^C <unfinished ...>

wchan:

$ cat /proc/11893/wchan
poll_schedule_timeout

stacktrace:

$ cat /proc/11893/stack
[] poll_schedule_timeout+0x39/0x60
[] do_select+0x6bb/0x7c0
[] core_sys_select+0x18a/0x2c0
[] sys_select+0x47/0x110
[] system_call_fastpath+0x16/0x1b
[] 0xffffffffffffffff
How to find out the reason why ssh processes are hanging?
You can get a single trace with:

strace -f sh -c 'command1 | command2'

The "-f" will "follow" fork calls into the child processes (so you'll also get any sub-commands invoked by command1 or command2, which may or may not be what you want). Also, you'll get a trace of the sh process too. If you want each process's output in a separate file, the "-ff" option will trace subprocesses and append the PID to the "-o" filename, as in:

strace -ff -o trace sh -c 'command1 | command2'

This should create separate trace.<PID> files for each forked child.
I have a pipeline like this:

command1 | command2

Is there a way to trace both commands simultaneously?
Is there a way to use strace to trace different parts of a command pipeline?
kernel BUG at fs/ext4/inode.c:2118!
invalid opcode: 0000 [#1] SMP

Appears to be an issue with the ext4 driver in your kernel.

Process mythbackend (pid: 27841, threadinfo ffff88004262a000, task ffff88007fb83330)

mythbackend is triggering it.

[<ffffffff811731df>] mpage_da_map_and_submit+0x2c6/0x2dc
[<ffffffff8117390a>] ext4_da_writepages+0x2d4/0x465

These are the stack frames; they show which calls were being made that led to this.
I’m running a linux server at home which is mostly a file and e-mail server and a digital video recorder. All the data goes on an ext4 partition on a software raid-6. Every now and then (sometimes twice a day, sometimes twice a month) the whole server locks up. Sometimes I have a kernel report in the syslog which I cannot understand: ------------[ cut here ]------------ kernel BUG at fs/ext4/inode.c:2118! invalid opcode: 0000 [#1] SMP last sysfs file: /sys/devices/virtual/net/ppp0/uevent CPU 0 Modules linked in: ppp_async crc_ccitt nvidia(P) fcpci(P) scsi_wait_scanPid: 27841, comm: mythbackend Tainted: P 2.6.39-gentoo-r3 #2 System manufacturer System Product Name/M2N-E RIP: 0010:[<ffffffff8116f580>] [<ffffffff8116f580>] mpage_da_submit_io+0x268/0x3bf RSP: 0018:ffff88004262bba8 EFLAGS: 00010286 RAX: ffffea000048b650 RBX: ffffea000051d118 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffff880000826890 RDI: 0000000000005d38 RBP: ffff88004262bcf8 R08: 000000000d654538 R09: 0100000000002820 R10: 0000000000005d0d R11: 0000000000000000 R12: ffff88004262bde8 R13: ffff88004262bd28 R14: ffff88005ef46150 R15: 0000000000005d37 FS: 00007fbeb053f700(0000) GS:ffff88007fc00000(0000) knlGS:00000000f74aa8e0 CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b CR2: 00007fdcb7a36000 CR3: 000000006b721000 CR4: 00000000000006f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Process mythbackend (pid: 27841, threadinfo ffff88004262a000, task ffff88007fb83330) Stack: ffff88007b193b88 ffff88004262bc98 ffff88004741c138 000004ac00001424 ffff88004262bc28 0000000000005d70 ffff88005ef46298 00000000811a337f 0000000000005d70 000000010000000e ffff88004262bc30 0000100000000000 Call Trace: [<ffffffff811731df>] mpage_da_map_and_submit+0x2c6/0x2dc [<ffffffff8117390a>] ext4_da_writepages+0x2d4/0x465 [<ffffffff810aafd6>] do_writepages+0x1c/0x26 [<ffffffff810a3bc0>] __filemap_fdatawrite_range+0x4b/0x4d [<ffffffff810a3bea>] filemap_write_and_wait_range+0x28/0x51 [<ffffffff810fcba1>] vfs_fsync_range+0x30/0x75 [<ffffffff810fcc3b>] vfs_fsync+0x17/0x19 [<ffffffff810fcc66>] do_fsync+0x29/0x3e [<ffffffff810fcc89>] sys_fdatasync+0xe/0x12 [<ffffffff8155f4fb>] system_call_fastpath+0x16/0x1b Code: c1 00 02 00 00 74 09 f0 80 60 01 fd 4c 89 40 18 4c 8b 08 41 f7 c1 00 10 00 00 75 09 4c 8b 08 41 80 e1 20 74 0a 4c 39 40 18 74 04 <0f> 0b eb fe 41 f6 45 12 80 74 05 f0 80 48 02 80 f0 80 60 01 ef RIP [<ffffffff8116f580>] mpage_da_submit_io+0x268/0x3bf RSP <ffff88004262bba8> ---[ end trace c228cd85b8ef2f99 ]---
What does this Linux kernel trace mean?
You can use Ftrace. Ftrace is a tracing utility built directly into the Linux kernel. See the kernel's ftrace documentation (Documentation/trace/ftrace.rst in the source tree) for usage.
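As a minimal sketch of tracing calls into one specific module with the function tracer (assuming tracefs is mounted at the usual location and the kernel was built with function tracing; "mymod" below is a placeholder for your module's name):

cd /sys/kernel/debug/tracing
echo ':mod:mymod' > set_ftrace_filter   # restrict tracing to that module's functions
echo function > current_tracer
echo 1 > tracing_on
cat trace_pipe                          # stream calls as they happen; Ctrl-C to stop
echo 0 > tracing_on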
We have strace to trace programs, ltrace for dynamic libraries, and others. Is there any equivalent for kernel objects, as in tracing just like strace but for a specific .ko? Thanks
Trace kernel object calls
You found the correct symbol yourself: __x64_sys_execve; you want to use the same when running the command:

$ sudo trace-bpfcc '__x64_sys_execve "%s", arg1'

There is no function called plain sys_execve in your kernel, and you have to match the exact symbol for BCC to find the relevant function.
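You can double-check which variant of the symbol your kernel actually exports before running the tool; a quick sketch (the exact prefix differs between architectures and kernel versions):

grep -w __x64_sys_execve /proc/kallsyms
# if this prints a line, __x64_sys_execve is the name to give trace-bpfcc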
I am following the bcc Tutorial and trying to execute the trace-bpfcc command:

sudo trace-bpfcc 'sys_execve "%s", arg1'

The command fails with an error:

cannot attach kprobe, probe entry may not exist
Failed to attach BPF program b'probe_sys_execve_1' to kprobe b'sys_execve'

While searching the web, I found that such an error can occur if the symbol __x64_sys_execve is missing in /proc/kallsyms, but I have one there. I don't have any kernel development experience at all; what do I need to do to fix this issue? My distro is Ubuntu 20
'cannot attach kprobe, probe entry may not exist' when execute trace-bpfcc
It's impossible to attach latrace to a running process. The article "Using latrace" in the MeeGo 1.2 developer documentation makes it clear:You must restart the process with latrace for the tracing to work.Someone should send a feature request to the latrace mailing list.
I want to trace a running multi-threaded process's library calls. As of 5 November 2012, there's only one way to get ltrace to fully support tracing multi-threaded processes: you must check out and compile an ltrace 0.7.0 prerelease. So I've decided to use latrace instead. But, looking at the latrace documentation, I didn't see an option to specify a PID to attach to. Which brings me to my question: Is there any way to attach latrace to an already-running process?
Is there any way to attach latrace to an already-running process?
Apparently this issue has nothing to do with Android. We have tested with our custom Linux version and we still have the same problem: ftrace produces millisecond precision while other tools are able to produce microsecond precision. Maybe an ftrace module version problem?
I'm trying to generate FTrace files inside an Android environment, using: root@adroid:# echo 1 > /sys/kernel/debug/tracing/events/sched/sched_switch/enable root@adroid:# echo 1 > /sys/kernel/debug/tracing/tracing_on root@adroid:# cat /sys/kernel/debug/tracing/trace > mytracefile.txt root@adroid:# echo 0 > /sys/kernel/debug/tracing/tracing_on root@adroid:# echo 0 > /sys/kernel/debug/tracing/events/sched/sched_switch/enableThe problem is that in mytracefile.txt the precision of time stamp is in milliseconds: <idle>-0 [000] d.h7 14186.690000: sched_wakeup: comm=tfm_b6bcf800 pid=1714 prio=35 success=1 target_cpu=000 <idle>-0 [000] d..3 14186.690000: sched_switch: prev_comm=swapper/0 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=tfm_b6bcf800 next_pid=1714 next_prio=35 tfm_b6bcf800-1714 [000] d..3 14186.690000: sched_switch: prev_comm=tfm_b6bcf800 prev_pid=1714 prev_prio=35 prev_state=D|W ==> next_comm=swapper/0 next_pid=0 next_prio=120 <idle>-0 [001] d.h3 14186.690000: sched_wakeup: comm=Player Aud Mixe pid=146 prio=35 success=1 target_cpu=001 <idle>-0 [001] d..3 14186.690000: sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=Player Aud Mixe next_pid=146 next_prio=35 Player Aud Mixe-146 [001] d..3 14186.690000: sched_switch: prev_comm=Player Aud Mixe prev_pid=146 prev_prio=35 prev_state=D ==> next_comm=swapper/1 next_pid=0 next_prio=120 <idle>-0 [001] d.h3 14186.690000: sched_wakeup: comm=Player Aud Mixe pid=146 prio=35 success=1 target_cpu=001 <idle>-0 [001] d..3 14186.690000: sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=Player Aud Mixe next_pid=146 next_prio=35 Player Aud Mixe-146 [001] d..3 14186.690000: sched_switch: prev_comm=Player Aud Mixe prev_pid=146 prev_prio=35 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120 <idle>-0 [001] d.h3 14186.700000: sched_wakeup: comm=Player Aud Mixe pid=146 prio=35 success=1 target_cpu=001 <idle>-0 [001] d..3 14186.700000: sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=Player Aud Mixe next_pid=146 next_prio=35Normally, it should be in the order of microseconds, like 14186.691234 instead of just 14186.690000. I have already tested all the other trace_clock options that I have (local, global and counter) but the results are always the same. Changing that parameter doesn't change anything in the output file. I followed carefully FTrace documentation but I don't know what else to change in configuration. Searching over the internet I had only been able to find this bug report but it was not solved: http://lists.linaro.org/pipermail/linaro-dev/2011-February/002720.html Any advice? Are there additional modules in the kernel that I have to install?Android Version: 4.2.2 Kernel Linux Version: 3.4.7dmesg output is also showing milliseconds precision: <4>[ 38.130000] oom_adj 0 => oom_score_adj 0 <4>[ 38.130000] oom_adj 1 => oom_score_adj 58 <4>[ 38.140000] oom_adj 2 => oom_score_adj 117 <4>[ 38.140000] oom_adj 4 => oom_score_adj 235 <4>[ 38.150000] oom_adj 9 => oom_score_adj 529 <4>[ 38.150000] oom_adj 15 => oom_score_adj 1000Update: We are building a graphic parser for android ftrace files (including atrace traces). That's why it is very important to have as much precision as possible. Using the same device and other software tools we can successfully have microseconds precision. 
So, now we're planning to edit the ftrace.c source code to modify the way the timestamp is generated, to this one:

static inline uint64_t my_custom_jiffies(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec*1000000 + tv.tv_usec;
}

I personally think that is not a very good solution/architecture/implementation, because this should be possible without modifying the source code... but it's the only idea that we have at this moment. What do you think?
How to increase ftrace precision to microseconds?
I haven't checked that you will get what you want with it, but the first thing I'd try is the audit subsystem. Make sure that the auditd daemon is started, then use auditctl to configure what you want to log. For ordinary filesystem accesses, you would do

auditctl -w /path/to/directory
auditctl -a exit,always -S fcntl -S open -S flock -F dir=/path/to/directory

The -S option can be used to restrict the logging to specific syscalls. The logs appear in /var/log/audit/audit.log on Debian, and probably on Fedora as well.

If you do know which process(es) may lock the file, then consider running strace on these processes (and only looking at the file-related system calls, or further restricting to specific syscalls).

strace -s9999 -o foo.strace -e file foo
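Once auditd is logging, ausearch (from the same audit package) can pull the relevant records back out; a sketch, assuming the default log location:

ausearch -f /path/to/directory            # all records touching the watched path
ausearch -f /path/to/directory -sc flock  # restrict to a single syscall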
I'm quite new to Linux and I have not really a clue on how to do this. I've got a directory and I'd like to monitor (output to shell) when a file inside that directory get's a file lock and when it is released. It would be okay to know as well other things, like when a file is created and similar, but I'm mainly interested about the locks. I don't need to know which process does the lock, it's more about the order in which this happens. I'm pretty sure some tool for this exists (I already installed dtrace but after --help I decided to ask a question here). Any pointers warm-heartedly appreciated. I'm running a fedora 14 box if that matters.
How to trace file locks (per directory)
From this Stack Overflow answer:

[10991.880408] EIP: 0060:[<c06969d4>] EFLAGS: 00210246 CPU: 0
[10991.880411] EIP is at iret_exc+0x7d0/0xa59

That gives you the faulting instruction pointer, both directly and in symbol+offset form. The part after the slash is the size of the function.

Assuming a similar format, ffffffff80942810 is the address, __warn is the symbol, 0x160 is the offset from that symbol, and 0x190 is the size of the __warn function.
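If you have a vmlinux with debug info matching the running kernel, reasonably recent kernel source trees ship a helper that resolves such symbol+offset/size strings to source file and line; a sketch (paths illustrative):

# run from the top of the kernel source tree used to build this kernel
./scripts/faddr2line vmlinux __warn+0x160/0x190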
I'm learning Linux kernel debugging, and dmesg is the tool that outputs the kernel debug log:

...
[ 2.988000] Trace:
[ 2.988000] [<ffffffff80942810>] __warn+0x160/0x190
[ 2.988000] [<ffffffff8111ae9c>] dwc3_probe+0xc1c/0x1e60
[ 2.988000] [<ffffffff8111ae9c>] dwc3_probe+0xc1c/0x1e60
[ 2.988000] [<ffffffff80fe4434>] platform_drv_probe+0x34/0x90
...

Question 1: IIUC, __warn is a kernel function, but what's the meaning of 0x160/0x190?
How to understand Trace from dmesg?
You could either set up auditing or use DTrace. There are various examples of how to use these to monitor file access on the web.
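For the audit route, a minimal sketch (assuming auditd is running; the key name "nsswitch" is arbitrary): place a watch on the file for writes and attribute changes, then query by that key:

auditctl -w /etc/nsswitch.conf -p wa -k nsswitch   # log writes and attribute changes
ausearch -k nsswitch                               # later: see which process/user touched it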
The nsswitch.conf file always gets corrupted somehow. I restore it from my backup every time it corrupts. When this file corrupts, its time stamp changes. Now I wonder if there is a way to track that file? I'd like to know which process, or maybe which user, causes that corruption. Is there a way to do that?
How to track file change
I ran a Slackware system for 8 years using ReiserFS v3 as the main filesystem. I don't think I ever had a problem until the disk started having hardware problems. I looked at your messages, and although the problem appears to come from filesystem code, it also looks like ext3 messages are mixed in there. Personally, I'd suspect a disk going bad, especially when you say "I've been running this system for years". Disks are complicated mechanically and electronically. They do go bad, in strange and unpredictable ways.
I just had a weird system freeze that I resolved using alt-sysrq-e. Here's my dmesg: https://gist.github.com/1609263 I was wondering whether it might be some kind of deadlock in the reiserfs3 code since it's mentioned so many times (and in nearly all the call traces)? Or would it look the same with any other file system? I just want to know if ReiserFS v3 is getting so old that I should bother reformatting, just to get rid of this. If this call trace would be produced with other filesystems too, I guess it's not resolvable by changing FS. I've been running this system for years now daily, and I never had this problem before. Not running a bleeding edge kernel either, just the standard Ubuntu 10.10 (3.0.0-14).
Is this task freeze related to ReiserFS (v3)? (or: is ReiserFS getting so old that it should be actively replaced?)
You need to define a breakpoint, for example

break main

Then

run

and gdb will start your program and stop when it enters main.
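A typical line-by-line session then looks like this (a sketch; ./prog stands for your binary compiled with -g):

$ gdb ./prog
(gdb) break main      # stop when main is entered
(gdb) run             # start the program
(gdb) next            # execute one line, stepping over function calls
(gdb) step            # execute one line, stepping into function calls
(gdb) print pid       # inspect a variable
(gdb) continue        # resume until the next breakpoint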
I have installed gdb and added the -g option to my compilation command, but when I try (gdb) s or (gdb) n it says:

The program is not being run.

It only works when I try (gdb) r, and then it runs and stops where my program stops because of its error (which I could see without gdb on the command line). How should I trace my code line by line?
How to trace C/C++ code line by line with "gdb"? [closed]
This log is a warning from the kernel indicating there's probably a bug somewhere in the kernel code. It's telling you that something is trying to free IRQ 18, even though it's already been freed before. As you have this repeatedly, it suggests something is trying to free it again and again.

The Comm: field shows it happens while running the ifconfig command, in other words, while configuring a network interface. The call stack also indicates part of the code is in the atl1c kernel module, which I believe is used for the Atheros L1C Gigabit Ethernet, a network adapter. So there seems to be enough correlation in there to suggest a bug somewhere in the driver for that device.

On the other hand, the kernel message is just a "WARNING", so it's possible that it's not really a bug (or a critical bug that would cause problems) but more of a problem with the design of the driver (maybe it's freeing the IRQ repeatedly for some reason...).

Searching for some of these terms in Google didn't really turn up much... So I'm not sure if this is a known problem or not. Kernel 3.10 is quite old... It's possible this has been fixed since, so trying a newer kernel (if that's an option for you) might be a good idea.
Could anyone give some insight on what these warnings mean in the below dmesg log? Do they mean that the atl1c module is failing to load completely or merely a warning? Any information that helps understanding the log better would be appreciated. Thank you. [Wed Apr 4 19:51:54 2018] ------------[ cut here ]------------ [Wed Apr 4 19:51:54 2018] WARNING: at kernel/irq/manage.c:1244 __free_irq+0xa7/0x200() [Wed Apr 4 19:51:54 2018] Trying to free already-free IRQ 18 [Wed Apr 4 19:51:54 2018] Modules linked in: button ax88179_178a usbnet tg3 r8169 cnic bnx2 vmxnet3 pcnet32 e1000 sfc netxen_nic qlge qlcnic qla3xxx pch_gbe ptp_pch sky2 skge jme ipg uio alx atl1c atl1e atl1 libphy mii exfat(O) btrfs zlib_deflate hfsplus md4 hmac bnx2x(O) libcrc32c mdio mlx5_core(O) mlx4_en(O) mlx4_core(O) mlx_compat(O) compat(O) qede(O) qed(O) atlantic(O) r8168(O) tn40xx(O) i40e(O) ixgbe(O) be2net(O) igb(O) i2c_algo_bit e1000e(O) dca fuse vfat fat glue_helper lrw gf128mul ablk_helper sha512_generic arc4 cryptd ecryptfs sha256_generic sha1_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_conservative cpufreq_powersave cpufreq_performance cpufreq_ondemand acpi_cpufreq mperf processor thermal_sys cpufreq_stats freq_table dm_snapshot [Wed Apr 4 19:51:54 2018] crc_itu_t crc_ccitt quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram(C) sg etxhci_hcd mpt3sas mpt2sas(O) megaraid_sas ata_piix mptctl mptsas mptspi mptscsih mptbase scsi_transport_spi megaraid megaraid_mbox megaraid_mm vmw_pvscsi BusLogic usb_storage xhci_hcd uhci_hcd ohci_hcd ehci_pci ehci_hcd usbcore usb_common el000(O) [Wed Apr 4 19:51:54 2018] CPU: 0 PID: 8840 Comm: ifconfig Tainted: P WC O 3.10.102 #15266 [Wed Apr 4 19:51:54 2018] Hardware name: System manufacturer System Product Name/M4A78LT-M-LE, BIOS 0704 06/11/2010 [Wed Apr 4 19:51:54 2018] ffffffff814a9ec3 ffffffff81032ee6 0000000000000000 ffff8800c74b3cf8 [Wed Apr 4 19:51:54 2018] ffff8800caa4da80 0000000000000012 ffff8800caa4db0c ffffffff81032f47 [Wed Apr 4 19:51:54 2018] ffffffff816f5e60 ffff880000000020 ffff8800c74b3d08 ffff8800c74b3cc8 [Wed Apr 4 19:51:54 2018] Call Trace: [Wed Apr 4 19:51:54 2018] [<ffffffff814a9ec3>] ? dump_stack+0xc/0x15 [Wed Apr 4 19:51:54 2018] [<ffffffff81032ee6>] ? warn_slowpath_common+0x56/0x70 [Wed Apr 4 19:51:54 2018] [<ffffffff81032f47>] ? warn_slowpath_fmt+0x47/0x50 [Wed Apr 4 19:51:54 2018] [<ffffffffa0aacc25>] ? atl1c_alloc_rx_buffer+0x385/0x410 [atl1c] [Wed Apr 4 19:51:54 2018] [<ffffffff8109c837>] ? __free_irq+0xa7/0x200 [Wed Apr 4 19:51:54 2018] [<ffffffff8109ca0f>] ? free_irq+0x2f/0xa0 [Wed Apr 4 19:51:54 2018] [<ffffffffa0aadddc>] ? atl1c_up+0x9c/0x190 [atl1c] [Wed Apr 4 19:51:54 2018] [<ffffffffa0aab396>] ? atl1c_free_irq+0x16/0x40 [atl1c] [Wed Apr 4 19:51:54 2018] [<ffffffffa0aae458>] ? atl1c_open+0x288/0x2e0 [atl1c] [Wed Apr 4 19:51:54 2018] [<ffffffff813ef92e>] ? __dev_open+0x7e/0xe0 [Wed Apr 4 19:51:54 2018] [<ffffffff813efb81>] ? __dev_change_flags+0x91/0x140 [Wed Apr 4 19:51:54 2018] [<ffffffff813efcb2>] ? dev_change_flags+0x12/0x50 [Wed Apr 4 19:51:54 2018] [<ffffffff8144a12c>] ? devinet_ioctl+0x5fc/0x6c0 [Wed Apr 4 19:51:54 2018] [<ffffffff810296e8>] ? __do_page_fault+0x1b8/0x480 [Wed Apr 4 19:51:54 2018] [<ffffffff813d599b>] ? sock_do_ioctl+0x1b/0x50 [Wed Apr 4 19:51:54 2018] [<ffffffff813d5dc8>] ? sock_ioctl+0x1d8/0x2a0 [Wed Apr 4 19:51:54 2018] [<ffffffff8110d0fe>] ? do_vfs_ioctl+0x20e/0x880 [Wed Apr 4 19:51:54 2018] [<ffffffff81238f86>] ? 
apparmor_cred_prepare+0x26/0x40 [Wed Apr 4 19:51:54 2018] [<ffffffff810f3417>] ? SyS_faccessat+0x197/0x2c0 [Wed Apr 4 19:51:54 2018] [<ffffffff8110d7f0>] ? SyS_ioctl+0x80/0xa0 [Wed Apr 4 19:51:54 2018] [<ffffffff814afeb2>] ? system_call_fastpath+0x16/0x1b [Wed Apr 4 19:51:54 2018] ---[ end trace 14fdad943159d686 ]--- [Wed Apr 4 19:51:54 2018] ------------[ cut here ]------------ [Wed Apr 4 19:51:54 2018] WARNING: at kernel/irq/manage.c:1244 __free_irq+0xa7/0x200() [Wed Apr 4 19:51:54 2018] Trying to free already-free IRQ 18 [Wed Apr 4 19:51:54 2018] Modules linked in: button ax88179_178a usbnet tg3 r8169 cnic bnx2 vmxnet3 pcnet32 e1000 sfc netxen_nic qlge qlcnic qla3xxx pch_gbe ptp_pch sky2 skge jme ipg uio alx atl1c atl1e atl1 libphy mii exfat(O) btrfs zlib_deflate hfsplus md4 hmac bnx2x(O) libcrc32c mdio mlx5_core(O) mlx4_en(O) mlx4_core(O) mlx_compat(O) compat(O) qede(O) qed(O) atlantic(O) r8168(O) tn40xx(O) i40e(O) ixgbe(O) be2net(O) igb(O) i2c_algo_bit e1000e(O) dca fuse vfat fat glue_helper lrw gf128mul ablk_helper sha512_generic arc4 cryptd ecryptfs sha256_generic sha1_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_conservative cpufreq_powersave cpufreq_performance cpufreq_ondemand acpi_cpufreq mperf processor thermal_sys cpufreq_stats freq_table dm_snapshot [Wed Apr 4 19:51:54 2018] crc_itu_t crc_ccitt quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram(C) sg etxhci_hcd mpt3sas mpt2sas(O) megaraid_sas ata_piix mptctl mptsas mptspi mptscsih mptbase scsi_transport_spi megaraid megaraid_mbox megaraid_mm vmw_pvscsi BusLogic usb_storage xhci_hcd uhci_hcd ohci_hcd ehci_pci ehci_hcd usbcore usb_common el000(O) [Wed Apr 4 19:51:54 2018] CPU: 0 PID: 8840 Comm: ifconfig Tainted: P WC O 3.10.102 #15266 [Wed Apr 4 19:51:54 2018] Hardware name: System manufacturer System Product Name/M4A78LT-M-LE, BIOS 0704 06/11/2010 [Wed Apr 4 19:51:54 2018] ffffffff814a9ec3 ffffffff81032ee6 0000000000000000 ffff8800c74b3cf8 [Wed Apr 4 19:51:54 2018] ffff8800caa4da80 0000000000000012 ffff8800caa4db0c ffffffff81032f47 [Wed Apr 4 19:51:54 2018] ffffffff816f5e60 ffff880000000020 ffff8800c74b3d08 ffff8800c74b3cc8 [Wed Apr 4 19:51:54 2018] Call Trace: [Wed Apr 4 19:51:54 2018] [<ffffffff814a9ec3>] ? dump_stack+0xc/0x15 [Wed Apr 4 19:51:54 2018] [<ffffffff81032ee6>] ? warn_slowpath_common+0x56/0x70 [Wed Apr 4 19:51:54 2018] [<ffffffff81032f47>] ? warn_slowpath_fmt+0x47/0x50 [Wed Apr 4 19:51:54 2018] [<ffffffffa0aacc25>] ? atl1c_alloc_rx_buffer+0x385/0x410 [atl1c] [Wed Apr 4 19:51:54 2018] [<ffffffff8109c837>] ? __free_irq+0xa7/0x200 [Wed Apr 4 19:51:54 2018] [<ffffffff8109ca0f>] ? free_irq+0x2f/0xa0 [Wed Apr 4 19:51:54 2018] [<ffffffffa0aadddc>] ? atl1c_up+0x9c/0x190 [atl1c] [Wed Apr 4 19:51:54 2018] [<ffffffffa0aab396>] ? atl1c_free_irq+0x16/0x40 [atl1c] [Wed Apr 4 19:51:54 2018] [<ffffffffa0aae458>] ? atl1c_open+0x288/0x2e0 [atl1c] [Wed Apr 4 19:51:54 2018] [<ffffffff813ef92e>] ? __dev_open+0x7e/0xe0 [Wed Apr 4 19:51:54 2018] [<ffffffff813efb81>] ? __dev_change_flags+0x91/0x140 [Wed Apr 4 19:51:54 2018] [<ffffffff813efcb2>] ? dev_change_flags+0x12/0x50 [Wed Apr 4 19:51:54 2018] [<ffffffff8144a12c>] ? devinet_ioctl+0x5fc/0x6c0 [Wed Apr 4 19:51:54 2018] [<ffffffff813d599b>] ? sock_do_ioctl+0x1b/0x50 [Wed Apr 4 19:51:54 2018] [<ffffffff813d5dc8>] ? sock_ioctl+0x1d8/0x2a0 [Wed Apr 4 19:51:54 2018] [<ffffffff810d57f1>] ? vma_rb_erase+0x121/0x260 [Wed Apr 4 19:51:54 2018] [<ffffffff8110d0fe>] ? do_vfs_ioctl+0x20e/0x880 [Wed Apr 4 19:51:54 2018] [<ffffffff810d53ed>] ? 
remove_vma+0x4d/0x60 [Wed Apr 4 19:51:54 2018] [<ffffffff810d7399>] ? do_munmap+0x359/0x4d0 [Wed Apr 4 19:51:54 2018] [<ffffffff8110d7f0>] ? SyS_ioctl+0x80/0xa0 [Wed Apr 4 19:51:54 2018] [<ffffffff814afeb2>] ? system_call_fastpath+0x16/0x1b [Wed Apr 4 19:51:54 2018] ---[ end trace 14fdad943159d687 ]--- [Wed Apr 4 19:51:54 2018] ip_tables: (C) 2000-2006 Netfilter Core Team [Wed Apr 4 19:51:54 2018] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) [Wed Apr 4 19:51:54 2018] ip6_tables: (C) 2000-2006 Netfilter Core Team [Wed Apr 4 19:51:55 2018] ------------[ cut here ]------------ [Wed Apr 4 19:51:55 2018] WARNING: at kernel/irq/manage.c:1244 __free_irq+0xa7/0x200() [Wed Apr 4 19:51:55 2018] Trying to free already-free IRQ 18 [Wed Apr 4 19:51:55 2018] Modules linked in: nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables xt_recent xt_iprange xt_limit xt_state xt_tcpudp xt_multiport xt_LOG nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack iptable_filter ip_tables x_tables button ax88179_178a usbnet tg3 r8169 cnic bnx2 vmxnet3 pcnet32 e1000 sfc netxen_nic qlge qlcnic qla3xxx pch_gbe ptp_pch sky2 skge jme ipg uio alx atl1c atl1e atl1 libphy mii exfat(O) btrfs zlib_deflate hfsplus md4 hmac bnx2x(O) libcrc32c mdio mlx5_core(O) mlx4_en(O) mlx4_core(O) mlx_compat(O) compat(O) qede(O) qed(O) atlantic(O) r8168(O) tn40xx(O) i40e(O) ixgbe(O) be2net(O) igb(O) i2c_algo_bit e1000e(O) dca fuse vfat fat glue_helper lrw gf128mul ablk_helper sha512_generic arc4 cryptd ecryptfs sha256_generic sha1_generic ecb [Wed Apr 4 19:51:55 2018] aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_conservative cpufreq_powersave cpufreq_performance cpufreq_ondemand acpi_cpufreq mperf processor thermal_sys cpufreq_stats freq_table dm_snapshot crc_itu_t crc_ccitt quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram(C) sg etxhci_hcd mpt3sas mpt2sas(O) megaraid_sas ata_piix mptctl mptsas mptspi mptscsih mptbase scsi_transport_spi megaraid megaraid_mbox megaraid_mm vmw_pvscsi BusLogic usb_storage xhci_hcd uhci_hcd ohci_hcd ehci_pci ehci_hcd usbcore usb_common el000(O) [Wed Apr 4 19:51:55 2018] CPU: 1 PID: 10035 Comm: ifconfig Tainted: P WC O 3.10.102 #15266 [Wed Apr 4 19:51:55 2018] Hardware name: System manufacturer System Product Name/M4A78LT-M-LE, BIOS 0704 06/11/2010 [Wed Apr 4 19:51:55 2018] ffffffff814a9ec3 ffffffff81032ee6 0000000000000000 ffff8800c97dbcf8 [Wed Apr 4 19:51:55 2018] ffff8800caa4da80 0000000000000012 ffff8800caa4db0c ffffffff81032f47 [Wed Apr 4 19:51:55 2018] ffffffff816f5e60 ffff880000000020 ffff8800c97dbd08 ffff8800c97dbcc8 [Wed Apr 4 19:51:55 2018] Call Trace: [Wed Apr 4 19:51:55 2018] [<ffffffff814a9ec3>] ? dump_stack+0xc/0x15 [Wed Apr 4 19:51:55 2018] [<ffffffff81032ee6>] ? warn_slowpath_common+0x56/0x70 [Wed Apr 4 19:51:55 2018] [<ffffffff81032f47>] ? warn_slowpath_fmt+0x47/0x50 [Wed Apr 4 19:51:55 2018] [<ffffffffa0aacc25>] ? atl1c_alloc_rx_buffer+0x385/0x410 [atl1c] [Wed Apr 4 19:51:55 2018] [<ffffffff8109c837>] ? __free_irq+0xa7/0x200 [Wed Apr 4 19:51:55 2018] [<ffffffff8109ca0f>] ? free_irq+0x2f/0xa0 [Wed Apr 4 19:51:55 2018] [<ffffffffa0aadddc>] ? atl1c_up+0x9c/0x190 [atl1c] [Wed Apr 4 19:51:55 2018] [<ffffffffa0aab396>] ? atl1c_free_irq+0x16/0x40 [atl1c] [Wed Apr 4 19:51:55 2018] [<ffffffffa0aae458>] ? atl1c_open+0x288/0x2e0 [atl1c] [Wed Apr 4 19:51:55 2018] [<ffffffff813ef92e>] ? __dev_open+0x7e/0xe0 [Wed Apr 4 19:51:55 2018] [<ffffffff813efb81>] ? __dev_change_flags+0x91/0x140 [Wed Apr 4 19:51:55 2018] [<ffffffff813efcb2>] ? 
dev_change_flags+0x12/0x50 [Wed Apr 4 19:51:55 2018] [<ffffffff8144a12c>] ? devinet_ioctl+0x5fc/0x6c0 [Wed Apr 4 19:51:55 2018] [<ffffffff810296e8>] ? __do_page_fault+0x1b8/0x480 [Wed Apr 4 19:51:55 2018] [<ffffffff813d599b>] ? sock_do_ioctl+0x1b/0x50 [Wed Apr 4 19:51:55 2018] [<ffffffff813d5dc8>] ? sock_ioctl+0x1d8/0x2a0 [Wed Apr 4 19:51:55 2018] [<ffffffff8110d0fe>] ? do_vfs_ioctl+0x20e/0x880 [Wed Apr 4 19:51:55 2018] [<ffffffff81238f86>] ? apparmor_cred_prepare+0x26/0x40 [Wed Apr 4 19:51:55 2018] [<ffffffff810f3417>] ? SyS_faccessat+0x197/0x2c0 [Wed Apr 4 19:51:55 2018] [<ffffffff8110d7f0>] ? SyS_ioctl+0x80/0xa0 [Wed Apr 4 19:51:55 2018] [<ffffffff814afeb2>] ? system_call_fastpath+0x16/0x1b [Wed Apr 4 19:51:55 2018] ---[ end trace 14fdad943159d688 ]---
Understanding dmesg warning
You may enable tracing for the appropriate command only, instead of doing it for the whole pipeline. Each command in a multi-command pipeline runs in its own subshell execution environment (except for the last one, in Bash, when the lastpipe option is in effect) and different set options can be applied to each of them. E.g., generalizing your code a bit, to have +tee not printed out in

$ set -x; echo foo | tee; { set +x; } 2>/dev/null
+ echo foo
+ tee
foo

a simple way could be

$ { set -x; echo foo; } | tee
+ echo foo
foo
I have the following in a script:

#!/bin/bash

logFile='script.log'
echo -n > $logFile
log="tee -a $logFile"

set -x
scp ... user@host:...
ssh user@host "
    echo '...message...'
" 2>&1 | $log
{ set +x ;} 2> /dev/null # avoids trace output of '++ set +x'

The output is:

++ ssh user@host ' echo '\''> ...message...'\'' '
++ tee -a script.log
> ...message...

Can the ++ tee ... trace line be suppressed somehow, as well?
Avoid printing of 'tee' trace line
From the ftrace documentation:

trace_pipe: The output is the same as the "trace" file but this file is meant to be streamed with live tracing. Reads from this file will block until new data is retrieved. Unlike the "trace" file, this file is a consumer. This means reading from this file causes sequential reads to display more current data. Once data is read from this file, it is consumed, and will not be read again with a sequential read. The "trace" file is static, and if the tracer is not adding more data, it will display the same information every time it is read. This file will not disable tracing while being read.

Or use trace-cmd(1), and specifically trace-cmd-show. From its manpage:

-p
(lower case 'P') Instead of displaying the contents of the "trace" file, use the "trace_pipe" file. The difference between the two is that the "trace" file is static. That is, if tracing is stopped, the "trace" file will show the same contents each time. The "trace_pipe" file is a consuming read, where a read of the file will consume the output of what was read and it will not read the same thing a second time even if tracing is stopped. This file also will block. If no data is available, trace-cmd show will stop and wait for data to appear.

Also see Knio's answer to How to set the buffer size for trace_pipe in ftrace? (on Stack Overflow) to see how you can change the size and behavior of the trace file.
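In practice, that means you can simply stream the consuming file into your collection file instead of repeatedly reading trace; a sketch:

cat /sys/kernel/debug/tracing/trace_pipe >> total_trace
# blocks while waiting for data; each record is delivered exactly once, so no duplicates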
When setting up the function_graph tracer in Linux (Ubuntu 18), the trace that is stored at /sys/kernel/debug/tracing/trace only stores a couple of seconds before overwriting itself. As the period might be variable, I cannot just save it with, for example,

cat /sys/kernel/debug/tracing/trace >> total_trace

because it might produce duplicates, which are not acceptable during postprocessing. Even worse, it might miss some information. Is there a way I can open the file and just pipe all the new incoming info to another one? Thanks
/sys/kernel/debug/tracing/trace overwrites itself after a few seconds. How can I collect it without duplicates?
Nowhere, gone, vanished. Well, more specifically, the file gets unlinked. The data is still sitting there on disk, but the link to it is removed. It used to be possible to retrieve the data, but nowadays the metadata is cleared and nothing's recoverable. There is no Trash can for rm, nor should there be. If you need a Trash can, you should use a higher-level interface. There is a command-line utility in trash-cli on Ubuntu, but most of the time GUI file managers like Nautilus or Dolphin are used to provide a standard Trash can. The Trash can is standard itself. Files trashed in Dolphin will be visible in the Trash from Nautilus. Files are usually moved to somewhere like ~/.local/share/Trash/files/ when trashed. The rm command on UNIX/Linux is comparable to del on DOS/Windows which also deletes and does not move files to the Recycle Bin. Another thing to realize is that moving a file across filesystems like to your USB disk from your hard disk drive is really 1) a copy of the file data followed by 2) unlinking the original file. You wouldn't want your Trash to be filled up with these extra copies.
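If you want an rm-like command that uses the standard Trash, the trash-cli package mentioned above provides one; a sketch of typical usage (command names as shipped by current trash-cli releases):

trash-put somefile.txt   # move the file to the Trash instead of unlinking it
trash-list               # show trashed files with their original paths
trash-restore            # interactively restore a trashed file
trash-empty              # permanently delete everything in the Trash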
Recently I accidentally ran rm on a set of files, and it got me thinking: where exactly do these files end up? That is to say, when working with a GUI, deleted files go to the Trash. What's the equivalent for rm, and is there a way of undoing an rm command?
Where do files go when the rm command is issued?
You can find the answers in the FreeDesktop.org Trash specification. Some excerpts:

$topdir/.Trash-$uid directory is to be used as the user's trash directory for this device/partition. $uid is the user's numeric identifier.

The $trash/files directory contains the files and directories that were trashed.

The $trash/info directory contains an “information file” for every file and directory in $trash/files.

There is no reference to an expunged directory, though.
I have a DNS-321 NAS drive which apparently runs Linux, so I logged in via ssh and I see .Trash-500 and .Trash-1000 directories. I realize that they're for trash, but why do I need two? Inside of them I see expunged, files, and info directories -- what are those for?
What are the .Trash-500 and .Trash-1000 directories?
I think you're looking for the vfs_recycle module to Samba.

vfs_recycle - Samba VFS recycle bin | samba.org

In your smb.conf file for a given share:

[share]
Path = /data/share
vfs objects = recycle
recycle:repository = .recycle
recycle:keeptree = yes
recycle:versions = yes
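After editing smb.conf you can validate and apply the change; a sketch (the service name varies by distribution and Samba version):

testparm -s              # parse smb.conf and report any errors
service samba restart    # reload Samba so the share picks up the recycle VFS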
I have a couple of Samba shares on my NAS running Debian Squeeze which I access through Windows 7. Deleting any of the files from Windows will completely delete the files from the NAS server. Is there any way to have a similar feature installed like the Trash/Recycle Bin? It would be nice if there were even a concept similar to Windows: Delete, and the file goes to the trash; Shift+Delete, and the file is deleted with no copy kept anywhere. Is there such a thing I could install?
How can I have a Trash/Recycle Bin for my Samba shares?
Under KDE you can hit Shift+Del to directly delete selected files (or directories). Or you can press Shift while choosing 'Move to Trash ...' in the context menu, which has the same effect. IIRC this also works under Windows. There is probably a trash-properties dialog somewhere in KDE to globally disable the trash feature; it is at least possible to configure this in Dolphin, but perhaps a more general KDE-wide solution is available as well.
From what I gather, the idea of a "trash can" is of Windows descent, and was to make a user's life easier. However, when I go to delete a file, I don't hit delete unless I know I don't need it and will never need it ever again. Period. I'm currently running OpenSuse and the trash can is a confusing "feature" (as I can't seem to find WHERE it is) that sometimes even creates cute little directories on my flash drives for trash. Basically put, I don't like the trash can idea. Is there a way I can "turn it off"? I'm assuming it's a filesystem thing, so it might be harder to do then I predict. Basically, I would like to perform a rm -rf on the file that is selected (-r in case it is a directory). Is this at all possible?
How to disable the trash can in KDE
Have a look at this article. According to the article, Ubuntu will create such folders when a file is deleted from a USB drive. Presumably this would allow a file to be restored if you accidentally deleted it. It contains the following solution:

Don't use the delete button only (otherwise the .Trash-1000 folder will be created). Press the key combination Shift+Delete to delete, and Ubuntu won't create a .Trash-1000 folder. (Note: if you delete files and folders this way they are gone forever!)

As an alternative you can also use the command line's rm command, which will also delete the file directly.
I have a 32 GB USB flash drive. When deleting files from the drive while it is plugged into an Ubuntu 16 laptop, it creates a folder called '.Trash-1000'. This .Trash-1000 folder contains two folders, 'files' and 'info', where files contains the files I have deleted and info contains metadata about those files. The issue is that this .Trash-1000 folder takes up space, because it holds a copy of each deleted file. I then eventually have to delete the .Trash-1000 folder when it starts filling up after multiple deletes. Is there a way to disable this feature on the USB drive?
How to disable creation of .Trash-1000 folder?
You need to install gvfs to get PCManFM's Trash Can to work. It stores the files in the FreeDesktop standard location: ~/.local/share/Trash/files
I've got PCManFM and Xfe as graphical file managers in my Arch Linux with Openbox. When I click on the Trash link in PCManFM I get an error saying "Operation not supported". Question: Where do PCManFM and xfe put files you sent to the Trash? Thanks.
Where is the trash directory for PCManFM and xfe?
Those files you removed may actually still be opened by another process. In that case the file space will become available when that process closes its handle to the file. You can look up these files with lsof:

lsof | grep "var/log" | grep deleted
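If you cannot restart the process that still holds the deleted file open, you can usually reclaim the space by truncating the file through /proc; a sketch (the PID and fd number are illustrative and come from the lsof output):

ls -l /proc/1234/fd | grep deleted   # find the fd pointing at the deleted file
: > /proc/1234/fd/4                  # truncate it to zero bytes, freeing the space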
I use trash-put to trash files from the command line. Recently, I aliased my rm command to trash-put so that I don't accidentally delete something important. However, what happened now was that I had to delete some files from my /var/log folder to free up some space on the / filesystem. I did this using sudo:

sudo rm /var/log/somelog
# Above command is equivalent to:
sudo trash-put /var/log/somelog

After doing this, there was no free space recovered on the partition, since the files must have moved to some trash can. However, when I checked my trash can there were no files. I tried to see if there was a .Trash-100 folder on the / partition, but even that was not there. So, where did my trashed file go? And how do I find it so that I can decimate it to recover some space?
Where is the trash can for root partition?
trash:// is a protocol, not a location. A post on AskUbuntu says it should be in ~/.local/share/Trash. Try there.
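To answer the general question, GIO-based tools can address such virtual locations directly from the terminal; a sketch (gio on newer systems, the older gvfs-ls on older ones):

gio list trash:///                # list the trash contents over the trash:// protocol
cd ~/.local/share/Trash/files     # or go to where the files actually live on disk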
It looks like I can get to the waste-basket through nautilus, but when I look at the location given by properties, I see "trash:///". But I can't "cd trash:///". Where is the waste-basket? And in general, if I can find a file in nautilus, how do I get there from terminal? I've had some similar issues in the past with mounted media as well, so a general answer would be greatly appreciated. In case it is relevant, I'm using PinguyOS.
How to find Nautilus wastebasket in the file system
This is used by programs which comply with the FreeDesktop.org Trash specification. rm doesn’t follow this specification, but many current desktop environments do: instead of deleting files outright, they move them to the appropriate trash directory, thus allowing them to be “undeleted” if necessary. On the command-line, one tool which can be used is gio trash; gio trash ${file} will move ${file} to the trash, and gio trash --empty will empty the trash. So if you wish you could make rm a function based on gio trash.
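For instance, a minimal sketch of such a function for ~/.bashrc (note that overriding rm can surprise scripts that expect real deletion, so a differently named command may be safer):

rm() {
    gio trash "$@"   # send the arguments to the Trash instead of unlinking them
}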
I am using RHEL 8, and I see the directory ~/.local/share/Trash/files, and there were a lot of files in it. Looking at the names of the files present there gave me the intuition that it is similar to the Recycle Bin of Windows. I tried playing around with it, and it appears that only files deleted through the file manager end up in ~/.local/share/Trash/files, and not files deleted using the rm command. Why is that? Am I missing something here? I tried googling for more information, but none of it gave a satisfactory answer. Can I get a better understanding of this directory, ~/.local/share/Trash/files?
Is ~/.local/share/Trash/files used by GNOME only for deleted files and not by rm command?
Move them to /tmp and restart (or empty /tmp in any other way). I've had this problem before and it's been the only way for me to fix it. Before doing this you should investigate why they aren't deleted, and use it as a last resort. There are no dangers to doing this, but I recommend trying to learn why they aren't deleted first.
So here is my problem I have a pretty strange file that I can't get out of my computer. I tried pretty much everything. First I used software like CleanMyMac, TrashIt, CCleaner etc... None of them was able to do anything. I tried to throw it to the trash and "secure trash it". It did not complete, so I finally aborted it. I then tried to use the terminal with a sudo rm -R but no: still there. The file is untrashable because it seems not to exist... Here is a view of what I got: sh-3.2# sudo rm -R /Users/Mawel/.Trash/Tunnelblick.app rm: /Users/Mawel/.Trash/Tunnelblick.app/Contents/Frameworks/Sparkle.framework/Resources: No such file or directory rm: /Users/Mawel/.Trash/Tunnelblick.app/Contents/Frameworks/Sparkle.framework/Versions/Current: No such file or directory rm: /Users/Mawel/.Trash/Tunnelblick.app/Contents/Frameworks/Sparkle.framework/Versions: Directory not empty rm: /Users/Mawel/.Trash/Tunnelblick.app/Contents/Frameworks/Sparkle.framework: Directory not empty rm: /Users/Mawel/.Trash/Tunnelblick.app/Contents/Frameworks: Directory not empty rm: /Users/Mawel/.Trash/Tunnelblick.app/Contents: Directory not empty rm: /Users/Mawel/.Trash/Tunnelblick.app: Directory not emptyIs it basically telling me that there is nothing there and I can't delete nothing? EDIT I tried some of your answers especially the one where I force the remove of each file one by one. Here is a sample of what happen : sh-3.2# cd Sparkle.framework/ sh-3.2# ls -al total 8 drwxr-xr-x@ 4 root wheel 136 22 jan 2014 . drwxr-xr-x@ 3 root wheel 102 17 jan 2014 .. lrwxrwxrwx@ 1 root wheel 26 22 jan 2014 Resources -> Versions/Current/Resources drwxr-xr-x@ 3 root wheel 102 9 fév 2014 Versions sh-3.2# sudo rm -f Resources sh-3.2# ls -al total 8 drwxr-xr-x@ 4 root wheel 136 22 jan 2014 . drwxr-xr-x@ 3 root wheel 102 17 jan 2014 .. lrwxrwxrwx@ 1 root wheel 26 22 jan 2014 Resources -> Versions/Current/Resources drwxr-xr-x@ 3 root wheel 102 9 fév 2014 Versions sh-3.2# EDIT 2 Thanks to DisplayName I finally manage to remove them by moving them to the /tmp folder and cleaning it.
Unable to delete a file whatever I do
So, the reason I couldn't restore it was simple. This link, https://specifications.freedesktop.org/trash-spec/trashspec-latest.html (thanks Abdullah!), specifies under "Contents of a trash directory" that: When trashing a file or directory, the implementation MUST create the corresponding file in $trash/info first. I moved the file/folder before I created the info file, which in turn caused the above problem. Also, for anyone interested in finding out how the Trash/Recycle Bin works, I really recommend reading the linked specification.
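To illustrate, here is a minimal sketch of the correct order, assuming the standard home trash location; the spec's percent-encoding of the Path value, exclusive creation of the info file, and name-collision handling are all omitted:

#!/bin/bash
# trash a file per the FreeDesktop spec: write the .trashinfo FIRST,
# then move the file into the files/ subdirectory
file="$1"
trash="$HOME/.local/share/Trash"
name="$(basename -- "$file")"

printf '[Trash Info]\nPath=%s\nDeletionDate=%s\n' \
    "$(realpath -- "$file")" \
    "$(date +%Y-%m-%dT%H:%M:%S)" \
    > "$trash/info/$name.trashinfo"

mv -- "$file" "$trash/files/$name"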
So, I am busy writing myself a program to move items to and restore items from my trash (recycle bin). Currently I can move files to the trash, but if I try to restore one from the trash using a file manager, it fails. The trash directory is located at ~/.local/share/Trash and it has 2 folders inside it: files/, to store the files you want to delete, and info/, to store the data about the files. I got the file naming correct in the application, so that is not the problem, and I got the details that get stored in the info/ folder correct as well. Yet, if I delete something with my file manager and then recover it from the trash, there is no problem, but when I move it to the trash with my code, it can only be recovered by manually moving it... Here are some examples from my application:

$ nrash todelete.txt
> Moved to trash

The above creates this (inside Trash/files):

todelete.txt

(inside Trash/info):

todelete.txt.trashinfo

Inside the .trashinfo file, the following resides:

[Trash Info]
Path=/home/netsu/Documents/todelete.txt
DeletionDate=2021-10-08T18:24:04

In a .trashinfo for a file deleted from the file manager instead, you would find this:

[Trash Info]
Path=/home/netsu/Documents/pp.txt
DeletionDate=2021-10-08T18:25:02

The two are exactly the same, yet don't behave the same... Is there something I'm missing about moving things to the trash? Why can the file that was placed there by the file manager be restored, but not the one I placed there? Is this the universal way the trash works on Linux distros, or does it change depending on distro/desktop environment? Note: my file does appear inside the Trash when I open it in the file manager, but I can't restore it.

Distro: Solus
DE: MATE
File Manager: Caja
How does the trash directory work?
This is a bit late, but I ran into the same issue. As it turns out, you have to disable the 'Erase files on removable media instead of "trash can" creation' preference. Apparently PCManFM treats any drive with an unmount button as removable media. Once that's done, sending files to the trash works as expected.
I have an HDD which I mount on /mnt/sda1 at startup (in /etc/fstab). Whenever I want to send a file to the trash in pcmanfm, I get the following message:

Some files cannot be moved to trash can because the underlying file systems don't support this operation. Do you want to delete them instead?

The owner of /mnt/sda1 and /mnt/sda1/.Trash-1000 is user 1000 (me), and I have read/write permissions. When deleting a file on the CLI using gvfs-trash or gio trash, the file is correctly sent to /mnt/sda1/.Trash-1000, and pcmanfm sees the file in the Trash and can even restore it. Still, pcmanfm itself cannot move files to the trash. Any clues?
pcmanfm doesn't send files to trash on external drive
OK, I figured it out myself: it's necessary to install the package gvfs, and then the trash comes back. So if you want to remove the trash from Thunar on Xfce (or Caja on MATE), you should remove that package, or better, remove only its trash mount description file, because gvfs affects more than just the trash: /usr/share/gvfs/mounts/trash.mount. The GNOME Virtual File System (gvfs) also provides: admin, burn, cdda, computer, dav, dav+sd, dns-sd, ftp, ftps, http, localtest, network, recent, sftp. So I do need it; my installation process is not complete yet, I guess.
Installation steps:
RHEL minimal (prompt only & internet connection)
X Window System
epel-release
Xfce (+ some plugins)

After all that, I can't find the Trash folder. I guess I forgot to install something, but I can't guess the package name that enables the Trash mechanism. So the question is: how do I enable the Trash folder mechanism in Xfce?
Missing Trash mechanism in Xfce after manual RHEL installation
According to my tests, it should be equivalent to "empty trash contents every 30 days". This is what I did: at 1:45 PM I cleared the trash, set the privacy options to purge after 1 hour, and rebooted the machine, just to be sure. I then deleted two (several-months-old) files and executed a script that creates a file and moves it to the trash every 12 minutes (a sketch of such a loop follows below). At 2:42 PM I accessed all files in the trash and edited some of them. At 2:45 PM they were all gone. Half an hour later I created two files, moved them to the trash, and again edited them while they were in the trash. At 3:45 PM the trash was empty again. I've concluded that the system clears the trash every hour, removing the files regardless of their age, the time they've stayed in the trash, or their access or modification times. Anyway, you could always ask a question on the GNOME mailing list, though as you can see from this bug report, the devs themselves appear to be quite confused about it:

This isn't clear. Does it clear the whole trash:/// contents every X days? Or does it remove trash files older than a certain age? I'm guessing the latter, so I would call it "remove-old-trash-files".
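A hypothetical reconstruction of such a test loop (the original script is not shown in the answer; gio trash and GNU mktemp are assumptions):

#!/bin/bash
# create a uniquely named file in $HOME and trash it, every 12 minutes
while true; do
    f="$(mktemp --tmpdir="$HOME" trash-test-XXXXXX)"
    gio trash "$f"
    sleep 720
done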
In GNOME 3.18, Settings > Privacy > Purge Trash & Temporary Files lets you configure the desktop environment to automatically empty the Trash "after 30 days". The description here isn't very clear. Is "30 days" supposed to mean 30 days of being in the Trash, 30 days after last opening/changing the file, or would all files be deleted together after 30 days without any new activity or new file in the Trash?
According to what does GNOME purge the trash "30 days later"?
Hey guys, I've found the solution: removing my old .Trash folder, which was there but wasn't working:

sudo rm -rf /media/FILES/.Trash-1000

worked like a charm; I'm now able to move files to the Trash from Nautilus. And I'm pretty sure that if I create a new user, they will be able to have their own trash too.
I'm currently trying to enable the Trash feature on an NTFS partition mounted automatically at boot. To do that I'm using the permissions option in my fstab:

UUID=1CACB8ABACB88136 /media/FILES ntfs defaults,permissions,relatime 0 0

Then I changed the permissions:

sudo chown :users -R /media/FILES/
sudo chmod g+rwx -R /media/FILES/

It works great, except that I still don't have the trash feature. I can read, write, and execute as a member of the users group, but I cannot use the Trash feature in Nautilus, only permanent delete. Any thoughts? BR
How can I enable the Trash feature on an NTFS partition with permissions?
Moving a file to Trash does not delete the file - it simply stores it in a folder named "Trash" - you can retrieve files moved there, so you don't get a prompt asking whether you want to delete them when moving them there (you are simply storing them somewhere else in the filesystem, not deleting them). By selecting "Ask before emptying Trash or deleting files" you are setting a prompt for when you actually delete them, either by emptying the Trash, or by selecting the file and hitting Shift+Delete. So, no - it's not a bug, it's a feature.
I am using Fedora 14 (Laughlin) with GNOME 2.32.0 and have gconf-editor installed. In gconf-editor, under apps -> nautilus -> preferences, confirm_trash is enabled (i.e., checked); also in the Nautilus file browser (home folder), under Edit -> Preferences -> Behavior, "Ask before emptying the Trash or deleting files" is checked. But I am not getting any confirmation box while deleting any file or folder. I googled for a solution but came up with nothing. Is this a bug? Is it solvable? Thank you.
Move to trash confirm box
There is no such place, for a fairly straightforward reason: there isn't any use for one. There is no situation I can think of where you can use mv globally but not rm. So while /dev/null exists because it meets a need, /dir/garbage does not, because it would not. Of course, you could say there is a need in this situation, to do with the issue: "what is the most efficient way to achieve this (in terms of lines of code to edit)?" Presuming the mechanism determining which files should be moved vs. deleted is whether target_dir is defined or not:

if [ -n "$target_dir" ]; then
    # move file
else
    # delete file
fi

Pretty simple, and you don't have to re-edit anything (I presume by "edit" you didn't mean something about maintenance or reducing the number of lines, because doing something in an awkward, hacky, or incorrect manner in the name of "simpler maintenance" -- or worse yet, pseudo-cleverness -- is oxymoronic). A fuller sketch combining this with the question's loop follows below. That said, you could fairly easily implement your own garbage directory by running a cron job like this once an hour:

garbage_dir=/usr/local/garbage
for file in "$garbage_dir"/*; do rm -f -R "$file"; done
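To tie the if/else into the question's loop, a minimal sketch; finish_file is a name invented here for illustration, and the variables are the question's own:

# move the file if a target is set, otherwise delete it
finish_file() {
    if [ -n "$target_dir" ]; then
        mv -- "$1" "$target_dir"
    else
        rm -f -- "$1"
    fi
}

source_dir=some/source/path
target_dir=some/target/path   # leave empty to delete instead of move
for f in "$source_dir"/*; do
    echo "$f"
    finish_file "$f"
done

Switching between "move" and "delete" then becomes a one-line edit to target_dir.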
I can redirect textual output to /dev/null in order to get rid of it. This is quite handy because this way I can easily enable/disable it in my bash code. Now I have a script that processes files and, after it is done with them, moves them to another destination. A simple example would look like this:

source_dir=some/source/path
target_dir=some/target/path
for f in `ls $source_dir`; do echo $f; mv $f $target_dir; done

Now when I decide that I'm no longer interested in moving the files to the target dir, but want to delete the files directly, what is the most efficient way to achieve this (in terms of lines of code to edit)? I can think of putting the mv command into a variable as well:

process_cmd=echo
finnish_cmd=mv
source_dir=some/source/path
target_dir=some/target/path
for f in `ls $source_dir`; do $process_cmd $f; $finnish_cmd $f $target_dir; done

Then I could later change to something like:

process_cmd=echo
finnish_cmd=rm
source_dir=some/source/path
target_dir=
for f in `ls $source_dir`; do $process_cmd $f; $finnish_cmd $f $target_dir; done

But I thought it would be even simpler if it were possible to specify a /dev/null-like trash target directory that is automatically cleaned up eventually (that is, sort of rm semantics with the mv command):

process_cmd=echo
finnish_cmd=mv
source_dir=some/source/path
target_dir=/dev/null # <--- is there some sort of trash folder for such tasks?
for f in `ls $source_dir`; do $process_cmd $f; $finnish_cmd $f $target_dir; done

Is there a definitive way to go / a general best practice for a case like this?
/dev/null-like automatic trash directory for file moves?
Finally I found the reason. This bug is related to LightDM starting awesome without dbus-launch. I fixed the whole problem described here by hand-editing the file /usr/share/xsessions/awesome.desktop as:

[Desktop Entry]
Name=awesome
Comment=Highly configurable framework window manager
TryExec=awesome
Exec=dbus-launch --exit-with-session --sh-syntax awesome
Type=Application

This is not a very pleasant solution, and not a good one either, since the hand-edited /usr/share/xsessions/awesome.desktop will be overwritten (and the fix lost) whenever I get a new update of awesome. I'm looking forward to better solutions, but for now, only for now, this works pretty well. The Trash now appears in Thunar and on xfdesktop, and the x-gvfs-show partitions work as expected.
TL;DR: custom partitions and the trash are not showing in Thunar under awesome (AwesomeWM).

XFCE: (screenshot) Awesome: (screenshot)

My fstab is:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=619542b0-8ce0-4dd1-9b0b-2d6224aa4f98 swap swap defaults,noatime 0 0
UUID=c49099f0-a6d7-4732-b41d-c34a7246019c / ext4 defaults,noatime 0 1
# /dev/sdb1 (games)
UUID=01CF50ED2AA59680 /mnt/games ntfs defaults,rw,uid=1000,umask=003,x-gvfs-show 0 0
# /dev/sdb2 (data)
UUID=3F2BFCA2397DA8FB /mnt/data ntfs defaults,rw,uid=1000,umask=003,x-gvfs-show 0 0
# /dev/sda4 (docs)
UUID=56D6C95328FD7038 /mnt/docs ntfs defaults,rw,uid=1000,umask=003,x-gvfs-show 0 0
# /dev/sdb3 (extra)
UUID=57b79234-ae2a-4206-9e53-95e6a6009fd5 /mnt/extra ext4 defaults,rw,x-gvfs-show 0 1

BTW, I know that this is in some way related to gvfs, which I already have running when I log in through AwesomeWM - but maybe I need something more? I don't understand why it doesn't work.
How to get the trash and x-gvfs-show partitions in Thunar under AwesomeWM?
I once had a similar issue trying to track down what was taking up space on my root partition but not being reported by Baobab, a disk usage analyzer. In the end I found the files in the trash folder of the root user, /root/.local/share/Trash. The reason Baobab and other utilities wouldn't show these files is that I was running them as a non-root user, so they didn't have the necessary permissions to read the /root folder. A quick su allowed me to enter the directory and rm * the files away.
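A quick way to check whether root's trash is the culprit (the path is the standard root trash location; sudo is needed because /root is not readable by ordinary users):

# summarize the size of the root user's trash
sudo du -sh /root/.local/share/Trash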
I am running CrunchBang, a Debian variant with the Openbox WM. I deleted a large number of files by browsing in the Thunar file manager and pressing the Del key, and they disappeared from view. I then went to ~/.local/share/Trash/files/ and deleted them there too. The filesystem still doesn't report the freed space though:

df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb5 61G 57G 371M 100% /
Deleted files from home and from .local/share/Trash/files, but the system doesn't report the freed space
It turns out .Trash-$uid has to have 700 permissions. Strangely, it won't work with 777.
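For the record, a minimal sketch of recreating the per-device trash directory by hand (mount point /hdd2 and uid 1000 are taken from the question below; the files/ and info/ subdirectories follow the FreeDesktop trash layout):

# recreate the per-user trash directory on the second drive
mkdir -p /hdd2/.Trash-1000/files /hdd2/.Trash-1000/info
chmod 700 /hdd2/.Trash-1000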
From KDE I use only Dolphin. I pulled out my second HDD for a couple of days (and changed fstab accordingly). Now I've put it back, and Dolphin won't "see" /hdd2/.Trash-1000/: if I delete the file /hdd2/file1, it moves to ~/.local/share/Trash/ when it should move to /hdd2/.Trash-1000/. How do I point Dolphin at the right trash dir?
KDE-dolphin trashcan on second HDD
I think changing the delete option to ask-no at the beginning of the macro and resetting it to yes at the end should do the trick. Note that inside a macro the set commands have to be issued via <enter-command> (the function normally bound to ":"); typed literally, they would just be replayed as keystrokes:

folder-hook =Trash 'macro index <delete> "<enter-command>set delete=ask-no<enter><delete-message><sync-mailbox><enter-command>set delete=yes<enter><change-folder>^<enter>"'
I have defined two "trash" macros in Mutt - one for the Trash folder (just mark as deleted and sync) and one for the remaining folders (save into Trash and sync):

folder-hook . 'macro index <delete> "s=Trash<enter><enter><sync-mailbox><change-folder>^<enter>"'
folder-hook =Trash 'macro index <delete> "<delete-message><sync-mailbox><change-folder>^<enter>"'

Together with set delete = yes this results in messages immediately being expunged, no questions asked. Such behaviour is fine outside the Trash folder; however, I'd like Mutt to ask for confirmation before messages are deleted from Trash. The only workaround I have come up with so far is not to sync in the macro (i.e., use only <delete-message> there), thus having to live with deleted messages being present until I sync manually. (Which is not ideal...) Is it possible to change this setup so that - in Trash - Mutt would ask for confirmation before actually marking the messages as deleted and expunging them? Thanks.
Mutt: ask before deleting messages from Trash
On a standard file system, erasing a file is just removing references to the data and making the space that holds the data available for reuse. When you move a file to another location on the same file system, the data stays in the same place on the disk; for the purpose of hiding things, this is a no-op. When you move a file to another file system, the system does a copy/erase; for the purpose of hiding things this is counter-productive: the initial data is still there on the original filesystem, it has just been marked erased. But now you have lost your reference to it, so you cannot overwrite it, unless you overwrite all the free space on the disk.

But all of this is idle talk if we don't know what the threat is:

1. someone with admin rights
2. someone with physical access to the computer... or your backups
3. another "normal" user on the computer

You can't do much against 1), because this person can install keyloggers, or make copies of your files before you erase them.

For 2):

- The best defense is filesystem encryption (computer, and backups).
- Use the shred command, if the FS type and parameters are compatible with it (see the example after this answer).
- Define a temporary file system (tmpfs) and destroy it when done.
- Save files directly to a removable device to which this person won't have access.

For 3), file access rights should be enough if handled properly.

If you delete a file on an external drive:

- if you use rm in a shell, nothing is copied to your internal disk
- if you use a file manager:
  - there may be a "hard-delete" command with no trash (Shift+Del in my file manager) that doesn't move the file to trash (the direct equivalent of rm)
  - if you use the "soft-delete" that moves to trash, then it depends how the external drive is set up:
    - if the drive has its own trash folder, the file is moved to the trash folder on the external drive and is not copied to your internal drive
    - otherwise the file is copied to your general Trash folder on the internal drive, and erased on the external drive
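Regarding shred, a minimal usage sketch, assuming GNU coreutils and a filesystem that overwrites data in place (journaling or copy-on-write filesystems and SSDs may keep extra copies - exactly the compatibility caveat above):

# overwrite the file's blocks, add a final pass of zeros, then unlink it
shred -u -z sensitive-file.odt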
If I save a file (.odt, .txt, .jpeg, etc.) to my HDD (not SSD), move it to the trash, then cut/paste the files in the trash to a USB stick, would this hold up against the best forensic equipment trying to recover the files from my HDD afterwards? Basically, I'm trying not to leave a trace of the file, obviously. Also, I'm thinking that if I don't want to trash/delete the files, since then they are recoverable, what about just moving them to a USB stick (after I no longer need them) and scrubbing the USB stick with the dd command: sudo dd if=/dev/urandom of=/dev/sdx bs=8192. So the question becomes: if I follow this method, are the files still located on my HDD? I would test it with PhotoRec, but since I am NOT deleting the file(s), it doesn't seem like using PhotoRec would be a viable option here. A website article states the following: "If you cut a file or a folder, it does not disappear, but instead becomes transparent. When you paste that file/folder, it is moved to the new location and disappears from the original location." https://www.issco.unige.ch/en/research/tutoriel-informatique/EN/copy_cut_delete_move_and_paste.html
If I save a file to my HDD (not SSD), move it to the trash, then cut/paste from the trash to USB, will this hold up to forensic analysis designed to recover files?
Take a look in /root/.local/share/Trash/files. I ran Caja as root, created a file in /root, deleted it, poked around a bit, and found it in that folder. I was then able to cut & paste it from there back into /root.
I am on Linux Mint and deleted a file with Caja, the file manager for the MATE desktop. Because Caja was open as root, the file I deleted is now in root's Trash... I tried what David Yockey suggested: first I tried to restore the deleted file rescuee.img, but it says "Das Argument ist ungültig" (the argument is invalid) and "keine Berechtigung" (no permission). With the second file, junk_file, the same thing happens... My question: how can I restore this file? Can someone help me? Thanks!
How to restore a file from root's Trash
If you open the Trash with Thunar, you can sort the files by "Date Modified", which is the mtime the file had while it was alive. So you can't tell what was deleted last, in case you keep a lot of files in there. But when a file is deleted - or rather, moved to the Trash - its ctime changes. So sorting by ctime can help:

find ~/.local/share/Trash/files/ -type f -printf '%C+ %f\0' | sort -rz | tr '\0' '\n' | head

The output is sorted by change time, i.e. by deletion time: the most recently deleted file comes first. You can then select its filename in Thunar and restore it.

Also, the files in Trash/files are regular files (for any regular file you have deleted); think of it as a mv happening instead of an rm when you press Delete in Thunar (without the Shift key). And in Trash/info you can see that there is a *.trashinfo file for each entry, containing the original path (for restoration) and the deletion date. We could also parse this info to find the latest one (a sketch follows below), or just sort the *.trashinfo files by mtime. For example, to print newer entries first:

cd ~/.local/share/Trash/info
ls -lt *.trashinfo
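Along those lines, a sketch that parses the DeletionDate fields directly, assuming GNU grep and sort (the ISO 8601 timestamps sort correctly as plain strings):

# list .trashinfo entries by recorded deletion date, newest first
cd ~/.local/share/Trash/info
grep -H '^DeletionDate=' *.trashinfo | sort -t= -k2 -r | head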
I want to restore a file I deleted accidentally. I cannot go to the trash/wastebasket folder and simply restore the file, because I've no idea what the file was: I deleted it by accidentally pressing the Delete key while the focus was on a Thunar window. (This is a particular hazard when you have a mechanical keyboard and human fingers.) Thunar does not show deletion dates, so it is not possible to simply sort the files in the trash/wastebasket folder. But if I right-click on a deleted file in the wastebasket and look at the file's properties, I see that files in the wastebasket do have a "Deleted" date property. How do I list the files by deletion date in a terminal? And how do I open the wastebasket folder in a terminal? And if the wastebasket folder is virtual, and the "Deleted" date shown in the file properties is stored in some other place, where is that place and how do I query it? (I see that ~/.local/share/Trash/files/ appears to contain the same files as the wastebasket you navigate to from the Thunar side-panel icon, but the files in ~/.local/share/Trash/files/ do not have a "Deleted" date attribute when you right-click to see their properties.)
How to look up the deletion date of files in Trash from the command line